Posted to hdfs-issues@hadoop.apache.org by "Beibei Zhao (Jira)" <ji...@apache.org> on 2022/12/12 06:21:00 UTC
[jira] [Updated] (HDFS-16868) Audit log duplicate problem when an ACE occurs in FSNamesystem.
[ https://issues.apache.org/jira/browse/HDFS-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Beibei Zhao updated HDFS-16868:
-------------------------------
Description:
checkSuperuserPrivilege calls logAuditEvent and rethrows the ACE when an AccessControlException occurs.
{code:java}
// This method logs operationName without super user privilege.
// It should be called without holding FSN lock.
void checkSuperuserPrivilege(String operationName, String path)
    throws IOException {
  if (isPermissionEnabled) {
    try {
      FSPermissionChecker.setOperationType(operationName);
      FSPermissionChecker pc = getPermissionChecker();
      pc.checkSuperuserPrivilege(path);
    } catch (AccessControlException ace) {
      logAuditEvent(false, operationName, path);
      throw ace;
    }
  }
}
{code}
So its callers, such as metaSave, invoke it like this:
{code:java}
/**
 * Dump all metadata into specified file
 * @param filename
 */
void metaSave(String filename) throws IOException {
  String operationName = "metaSave";
  checkSuperuserPrivilege(operationName);
  ......
  try {
    ......
    metaSave(out);
    ......
  } finally {
    readUnlock(operationName, getLockReportInfoSupplier(null));
  }
  logAuditEvent(true, operationName, null);
}
{code}
But setQuota catches the ACE and logs the same message a second time, producing a duplicate audit entry:
{code:java}
/**
 * Set the namespace quota and storage space quota for a directory.
 * See {@link ClientProtocol#setQuota(String, long, long, StorageType)} for the
 * contract.
 *
 * Note: This does not support ".inodes" relative path.
 */
void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
    throws IOException {
  ......
  try {
    if (!allowOwnerSetQuota) {
      checkSuperuserPrivilege(operationName, src);
    }
    ......
  } catch (AccessControlException ace) {
    logAuditEvent(false, operationName, src);
    throw ace;
  }
  getEditLog().logSync();
  logAuditEvent(true, operationName, src);
}
{code}
Maybe we should move the checkSuperuserPrivilege call out of the try block in setQuota, as metaSave already does.
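The double-logging shape can be reproduced in isolation. The sketch below uses hypothetical names (AuditDupSketch, checkPrivilege, an in-memory auditLog list); only the structure mirrors FSNamesystem: the privilege check audits its own failure, so a caller that also audits inside its catch block records the event twice, while moving the check out of the try logs it once.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the duplicate-audit pattern and the proposed fix.
// All names here are hypothetical stand-ins for the FSNamesystem code.
public class AuditDupSketch {
  static final List<String> auditLog = new ArrayList<>();

  static class AccessDenied extends Exception {}

  // Stand-in for checkSuperuserPrivilege: audits the failure itself,
  // then rethrows.
  static void checkPrivilege(String op) throws AccessDenied {
    auditLog.add("failure: " + op);
    throw new AccessDenied();
  }

  // Current setQuota shape: the privilege check sits inside the same
  // try whose catch block also audits -> the failure is logged twice.
  static void setQuotaCurrent() {
    try {
      checkPrivilege("setQuota");
      // ... quota update would happen here ...
    } catch (AccessDenied ace) {
      auditLog.add("failure: setQuota"); // duplicate entry
    }
  }

  // Proposed shape: the check runs before the try, so its ACE
  // propagates without being audited a second time.
  static void setQuotaProposed() throws AccessDenied {
    checkPrivilege("setQuota"); // audits once, then throws
    try {
      // ... quota update would happen here ...
    } catch (RuntimeException e) {
      auditLog.add("failure: setQuota"); // only non-privilege failures
      throw e;
    }
  }

  public static void main(String[] args) {
    setQuotaCurrent();
    System.out.println(auditLog.size()); // 2 entries: duplicated

    auditLog.clear();
    try {
      setQuotaProposed();
    } catch (AccessDenied ace) {
      // caller sees the exception; the audit was already written once
    }
    System.out.println(auditLog.size()); // 1 entry
  }
}
```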
Summary: Audit log duplicate problem when an ACE occurs in FSNamesystem. (was: FSNameSystem audit log duplicate problem when an ACE occurs.)
> Audit log duplicate problem when an ACE occurs in FSNamesystem.
> ---------------------------------------------------------------
>
> Key: HDFS-16868
> URL: https://issues.apache.org/jira/browse/HDFS-16868
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Beibei Zhao
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org