Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/12/12 06:43:00 UTC

[jira] [Commented] (HDFS-16868) Audit log duplicate problem when an ACE occurs in FSNamesystem.

    [ https://issues.apache.org/jira/browse/HDFS-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645935#comment-17645935 ] 

ASF GitHub Bot commented on HDFS-16868:
---------------------------------------

curie71 opened a new pull request, #5206:
URL: https://github.com/apache/hadoop/pull/5206

   
   HDFS-16868. checkSuperuserPrivilege and its callers log the same message when an ACE occurs.
   
   checkSuperuserPrivilege calls logAuditEvent and rethrows the exception when an AccessControlException (ACE) occurs:
   {code:java}
     // This method logs operationName without super user privilege.
     // It should be called without holding FSN lock.
     void checkSuperuserPrivilege(String operationName, String path)
         throws IOException {
       if (isPermissionEnabled) {
         try {
           FSPermissionChecker.setOperationType(operationName);
           FSPermissionChecker pc = getPermissionChecker();
           pc.checkSuperuserPrivilege(path);
         } catch(AccessControlException ace){
           logAuditEvent(false, operationName, path);
           throw ace;
         }
       }
     }
   {code}
   Its callers, such as metaSave, invoke it like this:
   {code:java}
     /**
      * Dump all metadata into specified file
      * @param filename
      */
     void metaSave(String filename) throws IOException {
       String operationName = "metaSave";
       checkSuperuserPrivilege(operationName);
       ......
    try {
      ......
      metaSave(out);
      ......
    } finally {
         readUnlock(operationName, getLockReportInfoSupplier(null));
       }
       logAuditEvent(true, operationName, null);
     }
   {code}
   Because metaSave performs the privilege check before entering the try block, a failed check there is audited exactly once (by checkSuperuserPrivilege itself). But setQuota, addCachePool, modifyCachePool, removeCachePool, createEncryptionZone and reencryptEncryptionZone call it inside the try block, catch the ACE and log the same message again, which I think just wastes audit log space:
   
   {code:java}
     /**
      * Set the namespace quota and storage space quota for a directory.
      * See {@link ClientProtocol#setQuota(String, long, long, StorageType)} for the
      * contract.
      * 
      * Note: This does not support ".inodes" relative path.
      */
     void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
         throws IOException {
       ......
       try {
         if(!allowOwnerSetQuota) {
           checkSuperuserPrivilege(operationName, src);
         }
      ......
       } catch (AccessControlException ace) {
         logAuditEvent(false, operationName, src);
         throw ace;
       }
       getEditLog().logSync();
       logAuditEvent(true, operationName, src);
     }
   {code}
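   
   With permissions enabled, a failed setQuota from a non-superuser is therefore audited twice with identical fields, roughly like this (illustrative entries following the default audit logger's field layout; the user, ip and path are made up):
   
   {code}
   allowed=false  ugi=alice (auth:SIMPLE)  ip=/192.168.0.1  cmd=setQuota  src=/dir  dst=null  perm=null
   allowed=false  ugi=alice (auth:SIMPLE)  ip=/192.168.0.1  cmd=setQuota  src=/dir  dst=null  perm=null
   {code}
   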
   Maybe we should move the checkSuperuserPrivilege call out of the try block, as metaSave and the other callers do.
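   
   A minimal sketch of that restructuring for setQuota (the elided logic marked "......" is unchanged; the explanatory comments are mine):
   
   {code:java}
     void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
         throws IOException {
       ......
       // Moved out of the try block: checkSuperuserPrivilege already
       // audit-logs the failure before rethrowing, so the catch below
       // no longer sees (and re-logs) the same ACE.
       if (!allowOwnerSetQuota) {
         checkSuperuserPrivilege(operationName, src);
       }
       try {
         ......
       } catch (AccessControlException ace) {
         // Still audits ACEs raised by the remaining checks inside the try.
         logAuditEvent(false, operationName, src);
         throw ace;
       }
       getEditLog().logSync();
       logAuditEvent(true, operationName, src);
     }
   {code}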
   
   
   ### For code changes:
   
    - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




> Audit log duplicate problem when an ACE occurs in FSNamesystem.
> ---------------------------------------------------------------
>
>                 Key: HDFS-16868
>                 URL: https://issues.apache.org/jira/browse/HDFS-16868
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Beibei Zhao
>            Priority: Major
>

