Posted to hdfs-dev@hadoop.apache.org by "Hui Fei (Jira)" <ji...@apache.org> on 2020/11/10 05:03:00 UTC

[jira] [Resolved] (HDFS-15667) Audit log record the unexpected allowed result when delete called

     [ https://issues.apache.org/jira/browse/HDFS-15667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hui Fei resolved HDFS-15667.
----------------------------
       Fix Version/s: 3.4.0
    Target Version/s:   (was: 3.3.1, 3.4.0)
          Resolution: Fixed

> Audit log record the unexpected allowed result when delete called
> -----------------------------------------------------------------
>
>                 Key: HDFS-15667
>                 URL: https://issues.apache.org/jira/browse/HDFS-15667
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.2.1, 3.4.0
>            Reporter: Baolong Mao
>            Assignee: Baolong Mao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: screenshot-1.png, screenshot-2.png
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I hit this issue when running rm on the root directory. When removing a non-root, non-empty directory, toRemovedBlocks isn't null and its toDeleteList size is 0.
>  !screenshot-1.png! 
> When will it return null?
> From this screenshot, we can see that if fileRemoved = -1, then toRemovedBlocks = null.
>  !screenshot-2.png! 
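> In code terms, the path in the screenshot boils down to roughly the following (a paraphrased sketch of FSDirDeleteOp#delete for illustration, not the exact source; the surrounding edit-log and lease handling is omitted):
> {code:java}
> // Paraphrased sketch: the inner delete reports -1 when nothing could be
> // removed, and that sentinel is turned into a null BlocksMapUpdateInfo,
> // so the caller only learns "there are no blocks to remove".
> long filesRemoved = delete(fsd, iip, collectedBlocks, removedINodes,
>     removedUCFiles, mtime);
> if (filesRemoved < 0) {
>   return null;
> }
> {code}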
> And when deleteAllowed(iip) returns false, fileRemoved can be -1:
> {code:java}
>  private static boolean deleteAllowed(final INodesInPath iip) {
>     if (iip.length() < 1 || iip.getLastINode() == null) {
>       if (NameNode.stateChangeLog.isDebugEnabled()) {
>         NameNode.stateChangeLog.debug(
>             "DIR* FSDirectory.unprotectedDelete: failed to remove "
>                 + iip.getPath() + " because it does not exist");
>       }
>       return false;
>     } else if (iip.length() == 1) { // src is the root
>       NameNode.stateChangeLog.warn(
>           "DIR* FSDirectory.unprotectedDelete: failed to remove " +
>               iip.getPath() + " because the root is not allowed to be deleted");
>       return false;
>     }
>     return true;
>   }
> {code}
> From the code of deleteAllowed, we can see that when src is the root, it returns false.
> So without this PR, when I execute *bin/hdfs dfs -rm -r /*,
> I see a confusing audit log line like the following:
> 2020-11-05 14:32:53,420 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(8102)) - allowed=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org