Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2020/02/03 19:28:13 UTC

[GitHub] [hadoop] jojochuang opened a new pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

jojochuang opened a new pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829
 
 
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and set the pull request title to start with the corresponding
   JIRA issue number (e.g. HADOOP-XXXXX. Fix a typo in YYY).
   For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
steveloughran commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r386483778
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
 ##########
 @@ -81,6 +81,20 @@ public void checkPermission(String fsOwner, String supergroup,
         }
         CALLED.add("checkPermission|" + ancestorAccess + "|" + parentAccess + "|" + access);
       }
+
+      @Override
+      public void checkPermissionWithContext(
+          AuthorizationContext authzContext) throws AccessControlException {
+        if (authzContext.ancestorIndex > 1
+            && authzContext.inodes[1].getLocalName().equals("user")
+            && authzContext.inodes[2].getLocalName().equals("acl")) {
+          this.ace.checkPermissionWithContext(authzContext);
+        }
+        CALLED.add("checkPermission|" + authzContext.ancestorAccess + "|" +
+            authzContext.parentAccess + "|" + authzContext.access);
+      }
+
+      public void abc() {}
 
 Review comment:
   what does this do?



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377978605
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+    public String fsOwner;
+    public String supergroup;
+    public UserGroupInformation callerUgi;
+    public INodeAttributes[] inodeAttrs;
+    public INode[] inodes;
+    public byte[][] pathByNameArr;
+    public int snapshotId;
+    public String path;
+    public int ancestorIndex;
+    public boolean doCheckOwner;
+    public FsAction ancestorAccess;
+    public FsAction parentAccess;
+    public FsAction access;
+    public FsAction subAccess;
+    public boolean ignoreEmptyDir;
+    public String operationName;
+    public CallerContext callerContext;
+
+    public static class Builder {
+      public String fsOwner;
+      public String supergroup;
+      public UserGroupInformation callerUgi;
+      public INodeAttributes[] inodeAttrs;
+      public INode[] inodes;
+      public byte[][] pathByNameArr;
+      public int snapshotId;
+      public String path;
+      public int ancestorIndex;
+      public boolean doCheckOwner;
+      public FsAction ancestorAccess;
+      public FsAction parentAccess;
+      public FsAction access;
+      public FsAction subAccess;
+      public boolean ignoreEmptyDir;
+      public String operationName;
+      public CallerContext callerContext;
+
+      public AuthorizationContext build() {
+        return new AuthorizationContext(this);
+      }
+
+      public Builder fsOwner(String val) {
+        this.fsOwner = val;
+        return this;
+      }
+
+      public Builder supergroup(String val) {
+        this.supergroup = val;
+        return this;
+      }
+
+      public Builder callerUgi(UserGroupInformation val) {
+        this.callerUgi = val;
+        return this;
+      }
+
+      public Builder inodeAttrs(INodeAttributes[] val) {
+        this.inodeAttrs = val;
+        return this;
+      }
+
+      public Builder inodes(INode[] val) {
+        this.inodes = val;
+        return this;
+      }
+
+      public Builder pathByNameArr(byte[][] val) {
+        this.pathByNameArr = val;
+        return this;
+      }
+
+      public Builder snapshotId(int val) {
+        this.snapshotId = val;
+        return this;
+      }
+
+      public Builder path(String val) {
+        this.path = val;
+        return this;
+      }
+
+      public Builder ancestorIndex(int val) {
+        this.ancestorIndex = val;
+        return this;
+      }
+
+      public Builder doCheckOwner(boolean val) {
+        this.doCheckOwner = val;
+        return this;
+      }
+
+      public Builder ancestorAccess(FsAction val) {
+        this.ancestorAccess = val;
+        return this;
+      }
+
+      public Builder parentAccess(FsAction val) {
+        this.parentAccess = val;
+        return this;
+      }
+
+      public Builder access(FsAction val) {
+        this.access = val;
+        return this;
+      }
+
+      public Builder subAccess(FsAction val) {
+        this.subAccess = val;
+        return this;
+      }
+
+      public Builder ignoreEmptyDir(boolean val) {
+        this.ignoreEmptyDir = val;
+        return this;
+      }
+
+      public Builder operationName(String val) {
+        this.operationName = val;
+        return this;
+      }
+
+      public Builder callerContext(CallerContext val) {
+        this.callerContext = val;
+        return this;
+      }
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir) {
+      this.fsOwner = fsOwner;
+      this.supergroup = supergroup;
+      this.callerUgi = callerUgi;
+      this.inodeAttrs = inodeAttrs;
+      this.inodes = inodes;
+      this.pathByNameArr = pathByNameArr;
+      this.snapshotId = snapshotId;
+      this.path = path;
+      this.ancestorIndex = ancestorIndex;
+      this.doCheckOwner = doCheckOwner;
+      this.ancestorAccess = ancestorAccess;
+      this.parentAccess = parentAccess;
+      this.access = access;
+      this.subAccess = subAccess;
+      this.ignoreEmptyDir = ignoreEmptyDir;
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir,
+        String operationName,
+        CallerContext callerContext) {
+      this(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
+          pathByNameArr, snapshotId, path, ancestorIndex, doCheckOwner,
+          ancestorAccess, parentAccess, access, subAccess, ignoreEmptyDir);
+      this.operationName = operationName;
 
 Review comment:
   can we have only one constructor with the all the parameters to avoid the operationName/callerContext assign in multiple places?
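   The suggestion above can be sketched as follows: keep a single all-args constructor as the only place fields are assigned, and have the Builder delegate to it. This is a simplified, hypothetical stand-in with a reduced field set, not the real Hadoop class:

```java
// Simplified stand-in for AuthorizationContext illustrating the reviewer's
// suggestion: ONE constructor assigns every field (including operationName
// and callerContext), and the Builder delegates to it, so no field is
// assigned in more than one place.
public class AuthorizationContextSketch {
    public final String path;
    public final String operationName;
    public final String callerContext; // simplified stand-in type

    // the single all-args constructor: the only place fields are assigned
    public AuthorizationContextSketch(String path, String operationName,
                                      String callerContext) {
        this.path = path;
        this.operationName = operationName;
        this.callerContext = callerContext;
    }

    public static class Builder {
        private String path;
        private String operationName;
        private String callerContext;

        public Builder path(String val) { this.path = val; return this; }
        public Builder operationName(String val) { this.operationName = val; return this; }
        public Builder callerContext(String val) { this.callerContext = val; return this; }

        public AuthorizationContextSketch build() {
            // delegate to the single constructor; no duplicated assignments
            return new AuthorizationContextSketch(path, operationName, callerContext);
        }
    }
}
```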



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391311899
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 ##########
 @@ -1982,6 +1982,7 @@ void setPermission(String src, FsPermission permission) throws IOException {
     FileStatus auditStat;
     checkOperation(OperationCategory.WRITE);
     final FSPermissionChecker pc = getPermissionChecker();
+    FSPermissionChecker.setOperationType(operationName);
 
 Review comment:
   Thanks @xiaoyuyao for the review. 
   * FSDirSymlinkOp#createSymlinkInt() is an exception. It doesn't check permissions in FSNamesystem, so I missed this one. Added.
   
   * NameNodeAdapter#getFileInfo() is used only in tests. 
   * NamenodeFsck#getBlockLocations() --> call it fsckGetBlockLocations to distinguish it from regular open operations.
   * FSNDNCache#addCacheDirective/removeCacheDirective/modifyCacheDirective/listCacheDirectives/listCachePools --> done
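   The plumbing being reviewed here (FSPermissionChecker.setOperationType called at each entry point) can be sketched with a static thread-local. The class below is a simplified, hypothetical stand-in for illustration, not the real FSPermissionChecker:

```java
// Hypothetical stand-in for the pattern in the patch: each NameNode entry
// point records its operation name in a thread-local before running
// permission checks, so the permission checker can see which operation
// (e.g. "setPermission", "fsckGetBlockLocations") triggered the check.
public class OperationTypeSketch {
    private static final ThreadLocal<String> OPERATION_TYPE = new ThreadLocal<>();

    public static void setOperationType(String op) {
        OPERATION_TYPE.set(op);
    }

    public static String getOperationType() {
        return OPERATION_TYPE.get();
    }

    // hypothetical entry point mirroring the shape of FSNamesystem methods:
    // record the operation name first, then perform the checked work
    public static String checkedOperation(String operationName) {
        setOperationType(operationName);
        // ... permission checking would consult getOperationType() here ...
        return getOperationType();
    }
}
```

   Because the value is thread-local, concurrent RPC handler threads each see only the operation name they set themselves.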
   
   



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-596714923
 
 
   The javac warning is because of the deprecation added in the code.
   
   The javadoc warning looks like a false positive to me.
   findbugs: fixed.
   
   getter/setter: used IntelliJ to assist with this part



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377977917
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+    public String fsOwner;
+    public String supergroup;
+    public UserGroupInformation callerUgi;
+    public INodeAttributes[] inodeAttrs;
+    public INode[] inodes;
+    public byte[][] pathByNameArr;
+    public int snapshotId;
+    public String path;
+    public int ancestorIndex;
+    public boolean doCheckOwner;
+    public FsAction ancestorAccess;
+    public FsAction parentAccess;
+    public FsAction access;
+    public FsAction subAccess;
+    public boolean ignoreEmptyDir;
+    public String operationName;
+    public CallerContext callerContext;
+
+    public static class Builder {
+      public String fsOwner;
+      public String supergroup;
+      public UserGroupInformation callerUgi;
+      public INodeAttributes[] inodeAttrs;
+      public INode[] inodes;
+      public byte[][] pathByNameArr;
+      public int snapshotId;
+      public String path;
+      public int ancestorIndex;
+      public boolean doCheckOwner;
+      public FsAction ancestorAccess;
+      public FsAction parentAccess;
+      public FsAction access;
+      public FsAction subAccess;
+      public boolean ignoreEmptyDir;
+      public String operationName;
+      public CallerContext callerContext;
+
+      public AuthorizationContext build() {
+        return new AuthorizationContext(this);
+      }
+
+      public Builder fsOwner(String val) {
+        this.fsOwner = val;
+        return this;
+      }
+
+      public Builder supergroup(String val) {
+        this.supergroup = val;
+        return this;
+      }
+
+      public Builder callerUgi(UserGroupInformation val) {
+        this.callerUgi = val;
+        return this;
+      }
+
+      public Builder inodeAttrs(INodeAttributes[] val) {
+        this.inodeAttrs = val;
+        return this;
+      }
+
+      public Builder inodes(INode[] val) {
+        this.inodes = val;
+        return this;
+      }
+
+      public Builder pathByNameArr(byte[][] val) {
+        this.pathByNameArr = val;
+        return this;
+      }
+
+      public Builder snapshotId(int val) {
+        this.snapshotId = val;
+        return this;
+      }
+
+      public Builder path(String val) {
+        this.path = val;
+        return this;
+      }
+
+      public Builder ancestorIndex(int val) {
+        this.ancestorIndex = val;
+        return this;
+      }
+
+      public Builder doCheckOwner(boolean val) {
+        this.doCheckOwner = val;
+        return this;
+      }
+
+      public Builder ancestorAccess(FsAction val) {
+        this.ancestorAccess = val;
+        return this;
+      }
+
+      public Builder parentAccess(FsAction val) {
+        this.parentAccess = val;
+        return this;
+      }
+
+      public Builder access(FsAction val) {
+        this.access = val;
+        return this;
+      }
+
+      public Builder subAccess(FsAction val) {
+        this.subAccess = val;
+        return this;
+      }
+
+      public Builder ignoreEmptyDir(boolean val) {
+        this.ignoreEmptyDir = val;
+        return this;
+      }
+
+      public Builder operationName(String val) {
+        this.operationName = val;
+        return this;
+      }
+
+      public Builder callerContext(CallerContext val) {
+        this.callerContext = val;
+        return this;
+      }
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir) {
+      this.fsOwner = fsOwner;
+      this.supergroup = supergroup;
+      this.callerUgi = callerUgi;
+      this.inodeAttrs = inodeAttrs;
+      this.inodes = inodes;
+      this.pathByNameArr = pathByNameArr;
+      this.snapshotId = snapshotId;
+      this.path = path;
+      this.ancestorIndex = ancestorIndex;
+      this.doCheckOwner = doCheckOwner;
+      this.ancestorAccess = ancestorAccess;
+      this.parentAccess = parentAccess;
+      this.access = access;
+      this.subAccess = subAccess;
+      this.ignoreEmptyDir = ignoreEmptyDir;
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir,
+        String operationName,
+        CallerContext callerContext) {
+      this(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
+          pathByNameArr, snapshotId, path, ancestorIndex, doCheckOwner,
+          ancestorAccess, parentAccess, access, subAccess, ignoreEmptyDir);
+      this.operationName = operationName;
+      this.callerContext = callerContext;
+    }
+
+    public AuthorizationContext(Builder builder) {
+      this(builder.fsOwner, builder.supergroup, builder.callerUgi,
+          builder.inodeAttrs, builder.inodes, builder.pathByNameArr,
+          builder.snapshotId, builder.path, builder.ancestorIndex,
+          builder.doCheckOwner, builder.ancestorAccess, builder.parentAccess,
+          builder.access, builder.subAccess, builder.ignoreEmptyDir);
+      this.operationName = builder.operationName;
+      this.callerContext = builder.callerContext;
+    }
+
+    @VisibleForTesting
+    @Override
+    public boolean equals(Object obj) {
+      if (!(obj instanceof AuthorizationContext)) {
+        return false;
+      }
+      return true;
 
 Review comment:
   I don't think it is right to always return true for two different AuthorizationContext instances.
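   A field-comparing equals() of the kind being asked for could look like the sketch below, shown on a simplified, hypothetical subset of the AuthorizationContext fields rather than the full class:

```java
import java.util.Objects;

// Sketch of an equals() that compares the fields defining the context's
// identity, instead of returning true for any two instances of the class.
public class ContextEquality {
    public final String path;
    public final int snapshotId;
    public final String operationName;

    public ContextEquality(String path, int snapshotId, String operationName) {
        this.path = path;
        this.snapshotId = snapshotId;
        this.operationName = operationName;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof ContextEquality)) {
            return false;
        }
        ContextEquality other = (ContextEquality) obj;
        // compare every identity-defining field
        return snapshotId == other.snapshotId
            && Objects.equals(path, other.path)
            && Objects.equals(operationName, other.operationName);
    }

    @Override
    public int hashCode() {
        // keep hashCode consistent with equals, per the Object contract
        return Objects.hash(path, snapshotId, operationName);
    }
}
```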



[GitHub] [hadoop] tasanuma commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
tasanuma commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-610123825
 
 
   Hi @jojochuang,
   NameNode still generates many log messages saying `Default authorization provider supports the new authorization provider API`. Do you plan to fix it?



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597438454
 
 
   javac and javadoc warnings can be ignored. Unit test failure is unrelated.



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381477751
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+    public String fsOwner;
+    public String supergroup;
+    public UserGroupInformation callerUgi;
+    public INodeAttributes[] inodeAttrs;
+    public INode[] inodes;
+    public byte[][] pathByNameArr;
+    public int snapshotId;
+    public String path;
+    public int ancestorIndex;
+    public boolean doCheckOwner;
+    public FsAction ancestorAccess;
+    public FsAction parentAccess;
+    public FsAction access;
+    public FsAction subAccess;
+    public boolean ignoreEmptyDir;
+    public String operationName;
+    public CallerContext callerContext;
+
+    public static class Builder {
+      public String fsOwner;
+      public String supergroup;
+      public UserGroupInformation callerUgi;
+      public INodeAttributes[] inodeAttrs;
+      public INode[] inodes;
+      public byte[][] pathByNameArr;
+      public int snapshotId;
+      public String path;
+      public int ancestorIndex;
+      public boolean doCheckOwner;
+      public FsAction ancestorAccess;
+      public FsAction parentAccess;
+      public FsAction access;
+      public FsAction subAccess;
+      public boolean ignoreEmptyDir;
+      public String operationName;
+      public CallerContext callerContext;
+
+      public AuthorizationContext build() {
+        return new AuthorizationContext(this);
+      }
+
+      public Builder fsOwner(String val) {
+        this.fsOwner = val;
+        return this;
+      }
+
+      public Builder supergroup(String val) {
+        this.supergroup = val;
+        return this;
+      }
+
+      public Builder callerUgi(UserGroupInformation val) {
+        this.callerUgi = val;
+        return this;
+      }
+
+      public Builder inodeAttrs(INodeAttributes[] val) {
+        this.inodeAttrs = val;
+        return this;
+      }
+
+      public Builder inodes(INode[] val) {
+        this.inodes = val;
+        return this;
+      }
+
+      public Builder pathByNameArr(byte[][] val) {
+        this.pathByNameArr = val;
+        return this;
+      }
+
+      public Builder snapshotId(int val) {
+        this.snapshotId = val;
+        return this;
+      }
+
+      public Builder path(String val) {
+        this.path = val;
+        return this;
+      }
+
+      public Builder ancestorIndex(int val) {
+        this.ancestorIndex = val;
+        return this;
+      }
+
+      public Builder doCheckOwner(boolean val) {
+        this.doCheckOwner = val;
+        return this;
+      }
+
+      public Builder ancestorAccess(FsAction val) {
+        this.ancestorAccess = val;
+        return this;
+      }
+
+      public Builder parentAccess(FsAction val) {
+        this.parentAccess = val;
+        return this;
+      }
+
+      public Builder access(FsAction val) {
+        this.access = val;
+        return this;
+      }
+
+      public Builder subAccess(FsAction val) {
+        this.subAccess = val;
+        return this;
+      }
+
+      public Builder ignoreEmptyDir(boolean val) {
+        this.ignoreEmptyDir = val;
+        return this;
+      }
+
+      public Builder operationName(String val) {
+        this.operationName = val;
+        return this;
+      }
+
+      public Builder callerContext(CallerContext val) {
+        this.callerContext = val;
+        return this;
+      }
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir) {
+      this.fsOwner = fsOwner;
+      this.supergroup = supergroup;
+      this.callerUgi = callerUgi;
+      this.inodeAttrs = inodeAttrs;
+      this.inodes = inodes;
+      this.pathByNameArr = pathByNameArr;
+      this.snapshotId = snapshotId;
+      this.path = path;
+      this.ancestorIndex = ancestorIndex;
+      this.doCheckOwner = doCheckOwner;
+      this.ancestorAccess = ancestorAccess;
+      this.parentAccess = parentAccess;
+      this.access = access;
+      this.subAccess = subAccess;
+      this.ignoreEmptyDir = ignoreEmptyDir;
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir,
+        String operationName,
+        CallerContext callerContext) {
+      this(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
+          pathByNameArr, snapshotId, path, ancestorIndex, doCheckOwner,
+          ancestorAccess, parentAccess, access, subAccess, ignoreEmptyDir);
+      this.operationName = operationName;
 
 Review comment:
   removed this constructor. It is not used.



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-581601343
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 14s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 32s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 33s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 33s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 46s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 27 new + 245 unchanged - 0 fixed = 272 total (was 245)  |
   | -1 :x: |  mvnsite  |   0m 36s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  shadedclient  |   3m 46s |  patch has errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m 14s |  hadoop-hdfs-project_hadoop-hdfs generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | -1 :x: |  findbugs  |   0m 36s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 36s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate ASF License warnings.  |
   |  |   |  55m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6ed8ff3e7720 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1e3a0b0 |
   | Default Java | 1.8.0_242 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/testReport/ |
   | Max. process+thread count | 303 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381478346
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+    public String fsOwner;
+    public String supergroup;
+    public UserGroupInformation callerUgi;
+    public INodeAttributes[] inodeAttrs;
+    public INode[] inodes;
+    public byte[][] pathByNameArr;
+    public int snapshotId;
+    public String path;
+    public int ancestorIndex;
+    public boolean doCheckOwner;
+    public FsAction ancestorAccess;
+    public FsAction parentAccess;
+    public FsAction access;
+    public FsAction subAccess;
+    public boolean ignoreEmptyDir;
+    public String operationName;
+    public CallerContext callerContext;
+
+    public static class Builder {
+      public String fsOwner;
+      public String supergroup;
+      public UserGroupInformation callerUgi;
+      public INodeAttributes[] inodeAttrs;
+      public INode[] inodes;
+      public byte[][] pathByNameArr;
+      public int snapshotId;
+      public String path;
+      public int ancestorIndex;
+      public boolean doCheckOwner;
+      public FsAction ancestorAccess;
+      public FsAction parentAccess;
+      public FsAction access;
+      public FsAction subAccess;
+      public boolean ignoreEmptyDir;
+      public String operationName;
+      public CallerContext callerContext;
+
+      public AuthorizationContext build() {
+        return new AuthorizationContext(this);
+      }
+
+      public Builder fsOwner(String val) {
+        this.fsOwner = val;
+        return this;
+      }
+
+      public Builder supergroup(String val) {
+        this.supergroup = val;
+        return this;
+      }
+
+      public Builder callerUgi(UserGroupInformation val) {
+        this.callerUgi = val;
+        return this;
+      }
+
+      public Builder inodeAttrs(INodeAttributes[] val) {
+        this.inodeAttrs = val;
+        return this;
+      }
+
+      public Builder inodes(INode[] val) {
+        this.inodes = val;
+        return this;
+      }
+
+      public Builder pathByNameArr(byte[][] val) {
+        this.pathByNameArr = val;
+        return this;
+      }
+
+      public Builder snapshotId(int val) {
+        this.snapshotId = val;
+        return this;
+      }
+
+      public Builder path(String val) {
+        this.path = val;
+        return this;
+      }
+
+      public Builder ancestorIndex(int val) {
+        this.ancestorIndex = val;
+        return this;
+      }
+
+      public Builder doCheckOwner(boolean val) {
+        this.doCheckOwner = val;
+        return this;
+      }
+
+      public Builder ancestorAccess(FsAction val) {
+        this.ancestorAccess = val;
+        return this;
+      }
+
+      public Builder parentAccess(FsAction val) {
+        this.parentAccess = val;
+        return this;
+      }
+
+      public Builder access(FsAction val) {
+        this.access = val;
+        return this;
+      }
+
+      public Builder subAccess(FsAction val) {
+        this.subAccess = val;
+        return this;
+      }
+
+      public Builder ignoreEmptyDir(boolean val) {
+        this.ignoreEmptyDir = val;
+        return this;
+      }
+
+      public Builder operationName(String val) {
+        this.operationName = val;
+        return this;
+      }
+
+      public Builder callerContext(CallerContext val) {
+        this.callerContext = val;
+        return this;
+      }
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir) {
+      this.fsOwner = fsOwner;
+      this.supergroup = supergroup;
+      this.callerUgi = callerUgi;
+      this.inodeAttrs = inodeAttrs;
+      this.inodes = inodes;
+      this.pathByNameArr = pathByNameArr;
+      this.snapshotId = snapshotId;
+      this.path = path;
+      this.ancestorIndex = ancestorIndex;
+      this.doCheckOwner = doCheckOwner;
+      this.ancestorAccess = ancestorAccess;
+      this.parentAccess = parentAccess;
+      this.access = access;
+      this.subAccess = subAccess;
+      this.ignoreEmptyDir = ignoreEmptyDir;
+    }
+
+    public AuthorizationContext(
+        String fsOwner,
+        String supergroup,
+        UserGroupInformation callerUgi,
+        INodeAttributes[] inodeAttrs,
+        INode[] inodes,
+        byte[][] pathByNameArr,
+        int snapshotId,
+        String path,
+        int ancestorIndex,
+        boolean doCheckOwner,
+        FsAction ancestorAccess,
+        FsAction parentAccess,
+        FsAction access,
+        FsAction subAccess,
+        boolean ignoreEmptyDir,
+        String operationName,
+        CallerContext callerContext) {
+      this(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
+          pathByNameArr, snapshotId, path, ancestorIndex, doCheckOwner,
+          ancestorAccess, parentAccess, access, subAccess, ignoreEmptyDir);
+      this.operationName = operationName;
+      this.callerContext = callerContext;
+    }
+
+    public AuthorizationContext(Builder builder) {
+      this(builder.fsOwner, builder.supergroup, builder.callerUgi,
+          builder.inodeAttrs, builder.inodes, builder.pathByNameArr,
+          builder.snapshotId, builder.path, builder.ancestorIndex,
+          builder.doCheckOwner, builder.ancestorAccess, builder.parentAccess,
+          builder.access, builder.subAccess, builder.ignoreEmptyDir);
+      this.operationName = builder.operationName;
+      this.callerContext = builder.callerContext;
+    }
+
+    @VisibleForTesting
+    @Override
+    public boolean equals(Object obj) {
+      if (!(obj instanceof AuthorizationContext)) {
+        return false;
+      }
+      return true;
 
 Review comment:
   yes... it was meant only for test. Updated the patch to include the full object equivalence check.
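  For reference, a full field-by-field check could look roughly like the sketch below. The class and field names here are illustrative stand-ins for a subset of `AuthorizationContext`, not the committed patch; note that once `equals` compares fields, `hashCode` must be overridden to match (this is exactly what FindBugs flags later in this thread).

```java
import java.util.Objects;

// Hypothetical minimal sketch of a field-by-field equals()/hashCode() pair.
// Field names mirror a subset of AuthorizationContext for illustration only.
public final class AuthzContextSketch {
    private final String fsOwner;
    private final String supergroup;
    private final String path;
    private final int snapshotId;

    public AuthzContextSketch(String fsOwner, String supergroup,
                              String path, int snapshotId) {
        this.fsOwner = fsOwner;
        this.supergroup = supergroup;
        this.path = path;
        this.snapshotId = snapshotId;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof AuthzContextSketch)) {
            return false;
        }
        AuthzContextSketch other = (AuthzContextSketch) obj;
        // Compare every field, not just the type, so two contexts are equal
        // only when all their attributes match.
        return snapshotId == other.snapshotId
            && Objects.equals(fsOwner, other.fsOwner)
            && Objects.equals(supergroup, other.supergroup)
            && Objects.equals(path, other.path);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals, otherwise equal objects may
        // land in different hash buckets (and FindBugs will complain).
        return Objects.hash(fsOwner, supergroup, path, snapshotId);
    }
}
```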



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377977497
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -68,6 +277,8 @@ public abstract void checkPermission(String fsOwner, String supergroup,
         boolean ignoreEmptyDir)
             throws AccessControlException;
 
+    void checkPermissionWithContext(AuthorizationContext authzContext)
 
 Review comment:
   NIT: can you add javadoc for the new public method?
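  For example, a javadoc along these lines could work (the wording below is a suggestion, not the committed text):

```java
/**
 * Checks whether the caller is authorized to perform the requested
 * operation, using the attributes bundled in the given
 * {@link AuthorizationContext} instead of the long parameter list of
 * {@link #checkPermission}.
 *
 * @param authzContext the context of the authorization request, including
 *                     the caller's UGI, the inodes on the path, and the
 *                     operation name
 * @throws AccessControlException if the caller is not authorized
 */
void checkPermissionWithContext(AuthorizationContext authzContext)
    throws AccessControlException;
```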



[GitHub] [hadoop] vinayakumarb commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
vinayakumarb commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-600195268
 
 
   After this change... 
   
   Namenode logs are getting flooded with below logs.
   ```
   2020-03-17 17:18:29,102 INFO org.apache.hadoop.security.UserGroupInformation: Default authorization provider supports the new authorization provider API
   ```
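  A common fix for this kind of flooding is to emit such a capability-probe message only once per process, for example with an atomic guard. The sketch below is a generic illustration of that pattern, not the actual UserGroupInformation code; the names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: log a per-request capability message only once.
public class LogOnceSketch {
    private static final AtomicBoolean LOGGED = new AtomicBoolean(false);
    static int emitted = 0;  // stands in for the logging framework here

    static void checkProviderSupport() {
        // compareAndSet succeeds only for the first caller, so the message
        // is emitted once instead of on every authorization call.
        if (LOGGED.compareAndSet(false, true)) {
            emitted++;  // e.g. LOG.info("provider supports the new API")
        }
    }
}
```

  The same effect can also be achieved by logging at DEBUG level, but the log-once guard keeps the message visible at INFO without flooding.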



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391313005
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java
 ##########
 @@ -0,0 +1,167 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.ipc.CallerContext;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAuthorizationContext {
+
+  private String fsOwner = "hdfs";
+  private String superGroup = "hdfs";
+  private UserGroupInformation ugi = UserGroupInformation.
+      createUserForTesting(fsOwner, new String[] {superGroup});
+
+  private INodeAttributes[] emptyINodeAttributes = new INodeAttributes[] {};
+  private INodesInPath iip = mock(INodesInPath.class);
+  private int snapshotId = 0;
+  private INode[] inodes = new INode[] {};
+  private byte[][] components = new byte[][] {};
+  private String path = "";
+  private int ancestorIndex = inodes.length - 2;
+
+  @Before
+  public void setUp() throws IOException {
+    when(iip.getPathSnapshotId()).thenReturn(snapshotId);
+    when(iip.getINodesArray()).thenReturn(inodes);
+    when(iip.getPathComponents()).thenReturn(components);
+    when(iip.getPath()).thenReturn(path);
+  }
+
+  @Test
+  public void testBuilder() {
+    String opType = "test";
+    CallerContext.setCurrent(new CallerContext.Builder(
+        "TestAuthorizationContext").build());
+
+    INodeAttributeProvider.AuthorizationContext.Builder builder =
+        new INodeAttributeProvider.AuthorizationContext.Builder();
+    builder.fsOwner(fsOwner).
+        supergroup(superGroup).
+        callerUgi(ugi).
+        inodeAttrs(emptyINodeAttributes).
+        inodes(inodes).
+        pathByNameArr(components).
+        snapshotId(snapshotId).
+        path(path).
+        ancestorIndex(ancestorIndex).
+        doCheckOwner(true).
+        ancestorAccess(null).
+        parentAccess(null).
+        access(null).
+        subAccess(null).
+        ignoreEmptyDir(true).
+        operationName(opType).
+        callerContext(CallerContext.getCurrent());
+
+    INodeAttributeProvider.AuthorizationContext authzContext = builder.build();
+    assertEquals(authzContext.getFsOwner(), fsOwner);
+    assertEquals(authzContext.getSupergroup(), superGroup);
+    assertEquals(authzContext.getCallerUgi(), ugi);
+    assertEquals(authzContext.getInodeAttrs(), emptyINodeAttributes);
+    assertEquals(authzContext.getInodes(), inodes);
+    assertEquals(authzContext.getPathByNameArr(), components);
+    assertEquals(authzContext.getSnapshotId(), snapshotId);
+    assertEquals(authzContext.getPath(), path);
+    assertEquals(authzContext.getAncestorIndex(), ancestorIndex);
+    assertEquals(authzContext.getOperationName(), opType);
+    assertEquals(authzContext.getCallerContext(), CallerContext.getCurrent());
+  }
+
+  @Test
+  public void testLegacyAPI() throws IOException {
+    INodeAttributeProvider.AccessControlEnforcer
+        mockEnforcer = mock(INodeAttributeProvider.AccessControlEnforcer.class);
+    INodeAttributeProvider mockINodeAttributeProvider =
+        mock(INodeAttributeProvider.class);
+    when(mockINodeAttributeProvider.getExternalAccessControlEnforcer(any())).
+        thenReturn(mockEnforcer);
+
+    FSPermissionChecker checker = new FSPermissionChecker(
+        fsOwner, superGroup, ugi, mockINodeAttributeProvider);
 
 Review comment:
  This is covered by existing tests when FSDirectory initializes an FSPermissionChecker, so this is good.



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-584453114
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  6s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | -1 :x: |  javac  |   1m  2s |  hadoop-hdfs-project_hadoop-hdfs generated 8 new + 580 unchanged - 0 fixed = 588 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 60 new + 245 unchanged - 0 fixed = 305 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 32s |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m  6s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate ASF License warnings.  |
   |  |   | 162m 51s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext defines equals and uses Object.hashCode()  At INodeAttributeProvider.java:Object.hashCode()  At INodeAttributeProvider.java:[lines 234-237] |
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6a227c3f2b1c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/testReport/ |
   | Max. process+thread count | 3232 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-596744180
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 57s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  3s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 39s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 36s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 36s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 29 new + 243 unchanged - 0 fixed = 272 total (was 243)  |
   | -1 :x: |  mvnsite  |   0m 37s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  shadedclient  |   4m  3s |  patch has errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   0m 39s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 41s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate ASF License warnings.  |
   |  |   |  59m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b02219c73c78 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44afe11 |
   | Default Java | 1.8.0_242 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-596834319
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 22s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 41s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 10s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | -1 :x: |  javac  |   1m  5s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 49s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 29 new + 243 unchanged - 0 fixed = 272 total (was 243)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  shadedclient  |   7m 52s |  patch has errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m 13s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | +1 :green_heart: |  findbugs  |   4m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 147m  4s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate ASF License warnings.  |
   |  |   | 216m 51s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
   |   | hadoop.hdfs.TestReplication |
   |   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
   |   | hadoop.hdfs.TestHFlush |
   |   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
   |   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2094776425d7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44afe11 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/testReport/ |
   | Max. process+thread count | 2687 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-600198558
 
 
   Thanks @vinayakumarb. I am looking into it now.



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-588509498
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 50s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 25s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  the patch passed  |
   | -1 :x: |  javac  |   1m 14s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 54 new + 245 unchanged - 0 fixed = 299 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   4m 40s |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 144m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 37s |  The patch generated 1 ASF License warnings.  |
   |  |   | 221m  3s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext defines equals and uses Object.hashCode()  At INodeAttributeProvider.java:Object.hashCode()  At INodeAttributeProvider.java:[lines 211-217] |
   | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
   |   | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
   |   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.server.namenode.TestCacheDirectives |
   |   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.namenode.TestEditLogAutoroll |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 366e61eb090f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f1aad0 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/testReport/ |
   | asflicense | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/artifact/out/patch-asflicense-problems.txt |
   | Max. process+thread count | 3761 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
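   The FindBugs item reported above ("AuthorizationContext defines equals and uses Object.hashCode()") is the standard HE_EQUALS_USE_HASHCODE pattern: once a class overrides equals(), it must override hashCode() consistently, or equal instances can land in different hash buckets. A minimal illustration of the fix, using a hypothetical simplified class rather than the actual AuthorizationContext fields:

```java
import java.util.Objects;

// Hypothetical stand-in for a class that overrides equals().
// Overriding hashCode() alongside equals() is what clears the warning.
class OperationContext {
    private final String fsOwner;
    private final String operationName;

    OperationContext(String fsOwner, String operationName) {
        this.fsOwner = fsOwner;
        this.operationName = operationName;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof OperationContext)) {
            return false;
        }
        OperationContext other = (OperationContext) obj;
        return Objects.equals(fsOwner, other.fsOwner)
            && Objects.equals(operationName, other.operationName);
    }

    // Must be consistent with equals(): equal objects hash equally.
    @Override
    public int hashCode() {
        return Objects.hash(fsOwner, operationName);
    }
}
```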
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-588542045
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 16s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | -1 :x: |  javac  |   1m  3s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 54 new + 245 unchanged - 0 fixed = 299 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 27s |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  29m  1s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 40s |  ASF License check generated no output?  |
   |  |   | 103m 21s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext defines equals and uses Object.hashCode()  At INodeAttributeProvider.java:Object.hashCode()  At INodeAttributeProvider.java:[lines 211-217] |
   | Failed junit tests | hadoop.hdfs.TestDFSClientFailover |
   |   | hadoop.hdfs.TestWriteRead |
   |   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
   |   | hadoop.hdfs.TestEncryptionZones |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ce99c1da1660 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/whitespace-eol.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/testReport/ |
   | Max. process+thread count | 2609 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-589836552
 
 
   The javac warnings are expected, since we deprecate the original API and add tests that still exercise the deprecated methods.
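   As a sketch of that pattern (names hypothetical, not the actual HDFS interface): the old entry point is kept but marked @Deprecated, and its default implementation forwards to the new context-based one, so existing implementations keep compiling while the compiler emits the expected deprecation warnings at call sites.

```java
// Hypothetical sketch of the deprecate-and-forward pattern; the real
// AccessControlEnforcer interface lives in INodeAttributeProvider.
interface Enforcer {
    /** Old entry point, kept for compatibility; triggers javac warnings. */
    @Deprecated
    default void checkPermission(String path) {
        checkPermissionWithContext(new Context(path, "unknown"));
    }

    /** New entry point carrying the full authorization context. */
    void checkPermissionWithContext(Context ctx);
}

// Minimal context object; the real one carries the caller UGI, inode
// attributes, the operation name, and more.
class Context {
    final String path;
    final String operationName;

    Context(String path, String operationName) {
        this.path = path;
        this.operationName = operationName;
    }
}
```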



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598022742
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | -1 :x: |  javac  |   1m  3s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 339 unchanged - 0 fixed = 345 total (was 339)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 108m  2s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate ASF License warnings.  |
   |  |   | 181m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23bd4bf9dade 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b931f3 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/testReport/ |
   | Max. process+thread count | 2876 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598867910
 
 
   Thanks @xiaoyuyao for your thorough review. I learned a lot from it.



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598858049
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  6s |  https://github.com/apache/hadoop/pull/1829 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/16/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391173121
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 ##########
 @@ -1982,6 +1982,7 @@ void setPermission(String src, FsPermission permission) throws IOException {
     FileStatus auditStat;
     checkOperation(OperationCategory.WRITE);
     final FSPermissionChecker pc = getPermissionChecker();
+    FSPermissionChecker.setOperationType(operationName);
 
 Review comment:
   There are other places that need to be patched with setOperationType. After the HDFS-7416 refactor, not all permission checks are done in FSN.
   
   Here is the list of missed ones:
   FSDirSymlinkOp#createSymlinkInt()
   NameNodeAdapter#getFileInfo()
   NamenodeFsck#getBlockLocations()
   FSNDNCache#addCacheDirective/removeCacheDirective/modifyCacheDirective/listCacheDirectives/listCachePools
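   To illustrate the pattern being discussed: the patch hunk above shows the operation name being recorded via a static `FSPermissionChecker.setOperationType(...)` call right after the permission checker is obtained. The self-contained sketch below mimics that thread-local mechanism only for illustration; the class name `OperationTypeDemo` and its methods are hypothetical stand-ins, not Hadoop code.

```java
// Illustrative sketch of a thread-local "operation type" holder, mirroring
// the FSPermissionChecker.setOperationType(operationName) call shown in the
// diff above. Each RPC handler thread records its own operation name before
// running permission checks, so an AccessControlEnforcer can read it later.
public class OperationTypeDemo {

  // One slot per thread; concurrent RPC handlers do not interfere.
  private static final ThreadLocal<String> OPERATION_TYPE = new ThreadLocal<>();

  // Analogous to FSPermissionChecker.setOperationType(operationName).
  public static void setOperationType(String opType) {
    OPERATION_TYPE.set(opType);
  }

  // What an enforcer would consult during the permission check.
  public static String getOperationType() {
    return OPERATION_TYPE.get();
  }

  public static void main(String[] args) {
    // E.g. a createSymlink handler would record its op before checking perms.
    setOperationType("createSymlink");
    System.out.println(getOperationType()); // prints "createSymlink"
  }
}
```

   The review point is that every entry point listed above (symlink creation, fsck block lookups, cache directive ops, ...) must make this call, or the enforcer sees a stale or null operation name left by a previous request on the same handler thread.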



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391313005
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java
 ##########
 @@ -0,0 +1,167 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.ipc.CallerContext;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAuthorizationContext {
+
+  private String fsOwner = "hdfs";
+  private String superGroup = "hdfs";
+  private UserGroupInformation ugi = UserGroupInformation.
+      createUserForTesting(fsOwner, new String[] {superGroup});
+
+  private INodeAttributes[] emptyINodeAttributes = new INodeAttributes[] {};
+  private INodesInPath iip = mock(INodesInPath.class);
+  private int snapshotId = 0;
+  private INode[] inodes = new INode[] {};
+  private byte[][] components = new byte[][] {};
+  private String path = "";
+  private int ancestorIndex = inodes.length - 2;
+
+  @Before
+  public void setUp() throws IOException {
+    when(iip.getPathSnapshotId()).thenReturn(snapshotId);
+    when(iip.getINodesArray()).thenReturn(inodes);
+    when(iip.getPathComponents()).thenReturn(components);
+    when(iip.getPath()).thenReturn(path);
+  }
+
+  @Test
+  public void testBuilder() {
+    String opType = "test";
+    CallerContext.setCurrent(new CallerContext.Builder(
+        "TestAuthorizationContext").build());
+
+    INodeAttributeProvider.AuthorizationContext.Builder builder =
+        new INodeAttributeProvider.AuthorizationContext.Builder();
+    builder.fsOwner(fsOwner).
+        supergroup(superGroup).
+        callerUgi(ugi).
+        inodeAttrs(emptyINodeAttributes).
+        inodes(inodes).
+        pathByNameArr(components).
+        snapshotId(snapshotId).
+        path(path).
+        ancestorIndex(ancestorIndex).
+        doCheckOwner(true).
+        ancestorAccess(null).
+        parentAccess(null).
+        access(null).
+        subAccess(null).
+        ignoreEmptyDir(true).
+        operationName(opType).
+        callerContext(CallerContext.getCurrent());
+
+    INodeAttributeProvider.AuthorizationContext authzContext = builder.build();
+    assertEquals(authzContext.getFsOwner(), fsOwner);
+    assertEquals(authzContext.getSupergroup(), superGroup);
+    assertEquals(authzContext.getCallerUgi(), ugi);
+    assertEquals(authzContext.getInodeAttrs(), emptyINodeAttributes);
+    assertEquals(authzContext.getInodes(), inodes);
+    assertEquals(authzContext.getPathByNameArr(), components);
+    assertEquals(authzContext.getSnapshotId(), snapshotId);
+    assertEquals(authzContext.getPath(), path);
+    assertEquals(authzContext.getAncestorIndex(), ancestorIndex);
+    assertEquals(authzContext.getOperationName(), opType);
+    assertEquals(authzContext.getCallerContext(), CallerContext.getCurrent());
+  }
+
+  @Test
+  public void testLegacyAPI() throws IOException {
+    INodeAttributeProvider.AccessControlEnforcer
+        mockEnforcer = mock(INodeAttributeProvider.AccessControlEnforcer.class);
+    INodeAttributeProvider mockINodeAttributeProvider =
+        mock(INodeAttributeProvider.class);
+    when(mockINodeAttributeProvider.getExternalAccessControlEnforcer(any())).
+        thenReturn(mockEnforcer);
+
+    FSPermissionChecker checker = new FSPermissionChecker(
+        fsOwner, superGroup, ugi, mockINodeAttributeProvider);
 
 Review comment:
   this is covered by existing tests when FSDirectory initializes a FSPermissionChecker.



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391179551
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -68,6 +391,16 @@ public abstract void checkPermission(String fsOwner, String supergroup,
         boolean ignoreEmptyDir)
             throws AccessControlException;
 
+    /**
+     * Checks permission on a file system object. Has to throw an Exception
+     * if the filesystem object is not accessessible by the calling Ugi.
 
 Review comment:
   NIT: typo: accessessible



[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r381478385
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##########
 @@ -68,6 +277,8 @@ public abstract void checkPermission(String fsOwner, String supergroup,
         boolean ignoreEmptyDir)
             throws AccessControlException;
 
+    void checkPermissionWithContext(AuthorizationContext authzContext)
 
 Review comment:
   done.



[GitHub] [hadoop] xiaoyuyao commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598553328
 
 
  +1. Thanks @jojochuang for the update. There are whitespace-related checkstyle issues which you can fix at commit. 



[GitHub] [hadoop] jojochuang merged pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang merged pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829
 
 
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597979746
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 29s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 34s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed  |
   | -1 :x: |  javac  |   1m 13s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 50s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 339 unchanged - 0 fixed = 345 total (was 339)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 16s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 110m 53s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate ASF License warnings.  |
   |  |   | 192m 47s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestWriteRead |
   |   | hadoop.hdfs.TestAclsEndToEnd |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6e3684b16c0f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b931f3 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/testReport/ |
   | Max. process+thread count | 4794 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-584480385
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   2m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 40s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed  |
   | -1 :x: |  javac  |   1m 15s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 52s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 55 new + 245 unchanged - 0 fixed = 300 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 51s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 52s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   4m 21s |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 126m 12s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate ASF License warnings.  |
   |  |   | 216m 38s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext defines equals and uses Object.hashCode()  At INodeAttributeProvider.java:Object.hashCode()  At INodeAttributeProvider.java:[lines 234-237] |
   | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 20fcfe1c7a85 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/testReport/ |
   | Max. process+thread count | 2822 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-596880220
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 22s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 26s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 11s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | -1 :x: |  javac  |   1m  7s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 243 unchanged - 0 fixed = 249 total (was 243)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 17s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 13s |  hadoop-hdfs-project/hadoop-hdfs generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  90m 59s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate ASF License warnings.  |
   |  |   | 163m 14s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Unread field:INodeAttributeProvider.java:[line 276] |
   |  |  Unread field:INodeAttributeProvider.java:[line 266] |
   |  |  Unread field:INodeAttributeProvider.java:[line 256] |
   |  |  Unread field:INodeAttributeProvider.java:[line 226] |
   |  |  Unread field:INodeAttributeProvider.java:[line 261] |
   |  |  Unread field:INodeAttributeProvider.java:[line 216] |
   |  |  Unread field:INodeAttributeProvider.java:[line 286] |
   |  |  Unread field:INodeAttributeProvider.java:[line 231] |
   |  |  Unread field:INodeAttributeProvider.java:[line 236] |
   |  |  Unread field:INodeAttributeProvider.java:[line 271] |
   |  |  Unread field:INodeAttributeProvider.java:[line 251] |
   |  |  Unread field:INodeAttributeProvider.java:[line 241] |
   |  |  Unread field:INodeAttributeProvider.java:[line 246] |
   |  |  Unread field:INodeAttributeProvider.java:[line 281] |
   |  |  Unread field:INodeAttributeProvider.java:[line 221] |
   | Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
   |   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
   |   | hadoop.hdfs.server.namenode.TestGetContentSummaryWithPermission |
   |   | hadoop.hdfs.web.TestWebHDFSAcl |
   |   | hadoop.hdfs.TestFileAppend2 |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestFsShellPermission |
   |   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
   |   | hadoop.hdfs.TestDFSPermission |
   |   | hadoop.hdfs.server.namenode.TestAuthorizationContext |
   |   | hadoop.hdfs.TestSafeMode |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.fs.TestGlobPaths |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
   |   | hadoop.security.TestPermission |
   |   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
   |   | hadoop.hdfs.server.namenode.TestFileContextAcl |
   |   | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
   |   | hadoop.hdfs.TestReservedRawPaths |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestExtendedAcls |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.security.TestPermissionSymlinks |
   |   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.web.TestWebHDFSXAttr |
   |   | hadoop.fs.permission.TestStickyBit |
   |   | hadoop.hdfs.TestTrashWithEncryptionZones |
   |   | hadoop.hdfs.server.namenode.TestINodeAttributeProvider |
   |   | hadoop.hdfs.server.namenode.TestNameNodeAcl |
   |   | hadoop.hdfs.TestHDFSTrash |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1b74628bb0eb 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44afe11 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/testReport/ |
   | Max. process+thread count | 3326 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597983050
 
 
   Test failures are due to OOM and unrelated. Triggered a rebuild regardless.



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-582693152
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   2m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  6s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 43s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 38s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed  |
   | -1 :x: |  javac  |   1m 13s |  hadoop-hdfs-project_hadoop-hdfs generated 5 new + 580 unchanged - 0 fixed = 585 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 50s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 47 new + 245 unchanged - 0 fixed = 292 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 51s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m 18s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 109m 18s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF License warnings.  |
   |  |   | 194m 31s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 25fc7bd1eabb 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 314e2f9 |
   | Default Java | 1.8.0_232 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/testReport/ |
   | Max. process+thread count | 2751 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r391187769
 
 

 ##########
 File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java
 ##########
 @@ -0,0 +1,167 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.ipc.CallerContext;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAuthorizationContext {
+
+  private String fsOwner = "hdfs";
+  private String superGroup = "hdfs";
+  private UserGroupInformation ugi = UserGroupInformation.
+      createUserForTesting(fsOwner, new String[] {superGroup});
+
+  private INodeAttributes[] emptyINodeAttributes = new INodeAttributes[] {};
+  private INodesInPath iip = mock(INodesInPath.class);
+  private int snapshotId = 0;
+  private INode[] inodes = new INode[] {};
+  private byte[][] components = new byte[][] {};
+  private String path = "";
+  private int ancestorIndex = inodes.length - 2;
+
+  @Before
+  public void setUp() throws IOException {
+    when(iip.getPathSnapshotId()).thenReturn(snapshotId);
+    when(iip.getINodesArray()).thenReturn(inodes);
+    when(iip.getPathComponents()).thenReturn(components);
+    when(iip.getPath()).thenReturn(path);
+  }
+
+  @Test
+  public void testBuilder() {
+    String opType = "test";
+    CallerContext.setCurrent(new CallerContext.Builder(
+        "TestAuthorizationContext").build());
+
+    INodeAttributeProvider.AuthorizationContext.Builder builder =
+        new INodeAttributeProvider.AuthorizationContext.Builder();
+    builder.fsOwner(fsOwner).
+        supergroup(superGroup).
+        callerUgi(ugi).
+        inodeAttrs(emptyINodeAttributes).
+        inodes(inodes).
+        pathByNameArr(components).
+        snapshotId(snapshotId).
+        path(path).
+        ancestorIndex(ancestorIndex).
+        doCheckOwner(true).
+        ancestorAccess(null).
+        parentAccess(null).
+        access(null).
+        subAccess(null).
+        ignoreEmptyDir(true).
+        operationName(opType).
+        callerContext(CallerContext.getCurrent());
+
+    INodeAttributeProvider.AuthorizationContext authzContext = builder.build();
+    assertEquals(authzContext.getFsOwner(), fsOwner);
+    assertEquals(authzContext.getSupergroup(), superGroup);
+    assertEquals(authzContext.getCallerUgi(), ugi);
+    assertEquals(authzContext.getInodeAttrs(), emptyINodeAttributes);
+    assertEquals(authzContext.getInodes(), inodes);
+    assertEquals(authzContext.getPathByNameArr(), components);
+    assertEquals(authzContext.getSnapshotId(), snapshotId);
+    assertEquals(authzContext.getPath(), path);
+    assertEquals(authzContext.getAncestorIndex(), ancestorIndex);
+    assertEquals(authzContext.getOperationName(), opType);
+    assertEquals(authzContext.getCallerContext(), CallerContext.getCurrent());
+  }
+
+  @Test
+  public void testLegacyAPI() throws IOException {
+    INodeAttributeProvider.AccessControlEnforcer
+        mockEnforcer = mock(INodeAttributeProvider.AccessControlEnforcer.class);
+    INodeAttributeProvider mockINodeAttributeProvider =
+        mock(INodeAttributeProvider.class);
+    when(mockINodeAttributeProvider.getExternalAccessControlEnforcer(any())).
+        thenReturn(mockEnforcer);
+
+    FSPermissionChecker checker = new FSPermissionChecker(
+        fsOwner, superGroup, ugi, mockINodeAttributeProvider);
 
 Review comment:
   NIT: do we have a test case when the attributeProvider=null? 
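   For illustration only, a minimal sketch of the fallback behavior a null-provider test would exercise. The class and method names below are simplified, hypothetical stand-ins, not the Hadoop `FSPermissionChecker`/`INodeAttributeProvider` implementations; the real checker falls back to its built-in enforcer when no attribute provider is configured.

```java
// Simplified stand-ins for INodeAttributeProvider.AccessControlEnforcer
// and FSPermissionChecker; illustrative only, not the Hadoop classes.
interface Enforcer {
    void checkPermission(String path);
}

public class NullProviderSketch {
    // Mirrors what a null-provider test would assert on: with no external
    // provider (and hence no external enforcer), the checker's own
    // default enforcer is selected.
    static Enforcer selectEnforcer(Enforcer external, Enforcer fallback) {
        return external != null ? external : fallback;
    }

    public static void main(String[] args) {
        Enforcer defaultEnforcer = path -> { /* default permission logic */ };
        // attributeProvider == null -> no external enforcer is supplied
        Enforcer chosen = selectEnforcer(null, defaultEnforcer);
        System.out.println(chosen == defaultEnforcer);
    }
}
```

   A test for this case would construct the checker with a null provider and verify the default-enforcer path is taken rather than a mock.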



[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-610406367
 
 
   Hi @tasanuma do you have RBF?
   I don't see this message in my test cluster (we don't have RBF) and wonder if it comes from RBF.
   
   Not familiar with RBF, but if each RPC call results in one permission checker object, I can see why there are many messages.
   ```java
   /**
    * Get a new permission checker used for making mount table access
    * control. This method will be invoked during each RPC call in router
    * admin server.
    *
    * @return Router permission checker.
    * @throws AccessControlException If the user is not authorized.
    */
   public static RouterPermissionChecker getPermissionChecker()
       throws AccessControlException {
     if (!isPermissionEnabled) {
       return null;
     }

     try {
       return new RouterPermissionChecker(routerOwner, superGroup,
           NameNode.getRemoteUser());
     } catch (IOException e) {
       throw new AccessControlException(e);
     }
   }
   ```
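   As an aside, the per-RPC pattern above can be illustrated with a toy construction counter. The class below is a hypothetical stand-in, not `RouterPermissionChecker` itself: every call creates a fresh checker, so anything logged at construction time repeats once per RPC.

```java
// Toy stand-in for RouterPermissionChecker: counts constructions to show
// that the per-RPC pattern creates (and would log from) a new checker on
// every call. Not the Hadoop implementation.
public class PerCallCheckerSketch {
    static int constructions = 0;

    static class Checker {
        Checker() {
            constructions++; // a construction-time log line would repeat here
        }
    }

    // Mirrors getPermissionChecker(): a new instance per RPC call.
    static Checker getPermissionChecker() {
        return new Checker();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            getPermissionChecker(); // three calls, three constructions
        }
        System.out.println(constructions);
    }
}
```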



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-589891303
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 27s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  1s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | -1 :x: |  javac  |   1m  2s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 55 new + 245 unchanged - 0 fixed = 300 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m  6s |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
    | -1 :x: |  unit  | 116m 52s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 22s |  The patch does not generate ASF License warnings.  |
   |  |   | 190m  0s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext defines equals and uses Object.hashCode()  At INodeAttributeProvider.java:Object.hashCode()  At INodeAttributeProvider.java:[lines 211-217] |
   | Failed junit tests | hadoop.hdfs.TestHDFSTrash |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f70f77162930 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6f84269 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/whitespace-eol.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/testReport/ |
   | Max. process+thread count | 3276 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597436999
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 35s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  0s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | -1 :x: |  javac  |   1m  1s |  hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579)  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 243 unchanged - 0 fixed = 249 total (was 243)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | +1 :green_heart: |  findbugs  |   3m 33s |  the patch passed  |
   ||| _ Other Tests _ |
    | -1 :x: |  unit  | 117m 45s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate ASF License warnings.  |
   |  |   | 190m 58s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5ffedccb1e05 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9cf83 |
   | Default Java | 1.8.0_242 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/testReport/ |
   | Max. process+thread count | 3090 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] tasanuma commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

Posted by GitBox <gi...@apache.org>.
tasanuma commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-610425722
 
 
   @jojochuang 
   No, I see the logs from the NameNode in a cluster that doesn't have Routers.
   I also confirmed it on my local laptop.
   https://gist.github.com/tasanuma/c066b0d3cadf3be38b2b6d921d4ae28f
   
   When a FileSystem API call is executed, it seems to generate the logs.
