Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/12/13 04:25:00 UTC

[jira] [Commented] (HDFS-16868) Fix audit log duplicate issue when an ACE occurs in FSNamesystem.

    [ https://issues.apache.org/jira/browse/HDFS-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646417#comment-17646417 ] 

ASF GitHub Bot commented on HDFS-16868:
---------------------------------------

Hexiaoqiao merged PR #5206:
URL: https://github.com/apache/hadoop/pull/5206




> Fix audit log duplicate issue when an ACE occurs in FSNamesystem.
> -----------------------------------------------------------------
>
>                 Key: HDFS-16868
>                 URL: https://issues.apache.org/jira/browse/HDFS-16868
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Beibei Zhao
>            Assignee: Beibei Zhao
>            Priority: Major
>              Labels: pull-request-available
>
> checkSuperuserPrivilege calls logAuditEvent and rethrows the exception when an AccessControlException (ACE) occurs.
> {code:java}
>   // This method logs operationName without super user privilege.
>   // It should be called without holding FSN lock.
>   void checkSuperuserPrivilege(String operationName, String path)
>       throws IOException {
>     if (isPermissionEnabled) {
>       try {
>         FSPermissionChecker.setOperationType(operationName);
>         FSPermissionChecker pc = getPermissionChecker();
>         pc.checkSuperuserPrivilege(path);
>       } catch(AccessControlException ace){
>         logAuditEvent(false, operationName, path);
>         throw ace;
>       }
>     }
>   }
> {code}
> Its callers, such as metaSave, invoke it like this: 
> {code:java}
>   /**
>    * Dump all metadata into specified file
>    * @param filename
>    */
>   void metaSave(String filename) throws IOException {
>     String operationName = "metaSave";
>     checkSuperuserPrivilege(operationName);
>     ......
>     try {
>         ......
>         metaSave(out);
>         ......
>     } finally {
>       readUnlock(operationName, getLockReportInfoSupplier(null));
>     }
>     logAuditEvent(true, operationName, null);
>   }
> {code}
> However, setQuota, addCachePool, modifyCachePool, removeCachePool, createEncryptionZone and reencryptEncryptionZone catch the ACE and log the same audit message again, which I think wastes log space: 
> {code:java}
>   /**
>    * Set the namespace quota and storage space quota for a directory.
>    * See {@link ClientProtocol#setQuota(String, long, long, StorageType)} for the
>    * contract.
>    * 
>    * Note: This does not support ".inodes" relative path.
>    */
>   void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
>       throws IOException {
>     ......
>     try {
>       if(!allowOwnerSetQuota) {
>         checkSuperuserPrivilege(operationName, src);
>       }
>      ......
>     } catch (AccessControlException ace) {
>       logAuditEvent(false, operationName, src);
>       throw ace;
>     }
>     getEditLog().logSync();
>     logAuditEvent(true, operationName, src);
>   }
> {code}
> Maybe we should move the checkSuperuserPrivilege call out of the try block, as metaSave and the other callers do.
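> The difference between the two patterns can be sketched with a small, self-contained example (class, method and counter names below are illustrative only, not the real FSNamesystem API; java.lang.SecurityException stands in for Hadoop's AccessControlException). It counts how many failed-audit entries a single denied call produces under each pattern:
> {code:java}
> public class AuditDemo {
> 
>   static int failedAuditEvents;
> 
>   // Stand-in for logAuditEvent(false, operationName, path).
>   static void logAuditEvent() {
>     failedAuditEvents++;
>   }
> 
>   // Like checkSuperuserPrivilege: log the failed audit event, then throw.
>   static void checkSuperuserPrivilege() {
>     logAuditEvent();
>     throw new SecurityException("superuser privilege is required");
>   }
> 
>   // Current pattern in setQuota et al.: the check sits inside the try
>   // block, so the catch clause logs the same failure a second time.
>   static int buggyCaller() {
>     failedAuditEvents = 0;
>     try {
>       checkSuperuserPrivilege();
>     } catch (SecurityException ace) {
>       logAuditEvent(); // duplicate audit entry
>     }
>     return failedAuditEvents;
>   }
> 
>   // Proposed pattern (as in metaSave): run the check before the try block
>   // whose catch clause logs, so only checkSuperuserPrivilege itself logs.
>   static int fixedCaller() {
>     failedAuditEvents = 0;
>     try {
>       checkSuperuserPrivilege(); // moved out of the inner try below
>       try {
>         // ... the actual locked work would happen here ...
>       } catch (SecurityException ace) {
>         logAuditEvent(); // still covers ACEs from the work itself
>         throw ace;
>       }
>     } catch (SecurityException propagated) {
>       // rethrown to the RPC layer in real code; swallowed here only so
>       // the demo can return the count
>     }
>     return failedAuditEvents;
>   }
> 
>   public static void main(String[] args) {
>     System.out.println("buggy pattern logs " + buggyCaller() + " failed events");
>     System.out.println("fixed pattern logs " + fixedCaller() + " failed event");
>   }
> }
> {code}
> With the check inside the try block a single denied call is logged twice; with the check moved out it is logged once.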



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org