Posted to hdfs-dev@hadoop.apache.org by "Akira Ajisaka (Jira)" <ji...@apache.org> on 2021/06/09 08:41:00 UTC

[jira] [Created] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

Akira Ajisaka created HDFS-16059:
------------------------------------

             Summary: dfsadmin -listOpenFiles -blockingDecommission can miss some files
                 Key: HDFS-16059
                 URL: https://issues.apache.org/jira/browse/HDFS-16059
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: dfsadmin
            Reporter: Akira Ajisaka


While reviewing HDFS-13671, I found that "dfsadmin -listOpenFiles -blockingDecommission" can miss some files.

[https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
{quote}If the DataNodes have the following open files and we want to list all of them:

DN1: [1001, 1002, 1003, ... , 2000]
DN2: [1, 2, 3, ... , 1000]

At first getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, ... , 2000] because it reached the max size (=1000). Next, getFilesBlockingDecom(2000, "/") is called because the last inode ID of the previous result is 2000. That way the open files of DN2 are missed.
{quote}
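Below is a minimal, self-contained Java sketch of the flaw. The class, method signature, and data layout are simplified assumptions for illustration, not the actual FSNamesystem code; only the method name getFilesBlockingDecom and the batch size of 1000 come from the discussion above.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch (not the real FSNamesystem implementation) of
// cursor-based pagination over per-DataNode open-file lists.
public class ListOpenFilesSketch {
  static final int BATCH_SIZE = 1000; // max results per response

  // Mimics getFilesBlockingDecom(prevId, path): scan DataNodes in order,
  // collect inode IDs greater than prevId, stop once the batch is full.
  static List<Long> getFilesBlockingDecom(long prevId,
                                          List<List<Long>> dataNodes) {
    List<Long> batch = new ArrayList<>();
    for (List<Long> dnOpenFiles : dataNodes) {
      for (long inodeId : dnOpenFiles) {
        if (inodeId > prevId) {
          batch.add(inodeId);
          if (batch.size() >= BATCH_SIZE) {
            return batch; // returns before later DataNodes are visited
          }
        }
      }
    }
    return batch;
  }

  public static void main(String[] args) {
    List<Long> dn1 = new ArrayList<>();
    List<Long> dn2 = new ArrayList<>();
    for (long i = 1001; i <= 2000; i++) dn1.add(i); // DN1: 1001..2000
    for (long i = 1; i <= 1000; i++) dn2.add(i);    // DN2: 1..1000
    List<List<Long>> dataNodes = Arrays.asList(dn1, dn2);

    // First call fills the whole batch from DN1, so DN2 is never reached.
    List<Long> first = getFilesBlockingDecom(0, dataNodes);
    long lastId = first.get(first.size() - 1); // 2000

    // Second call uses 2000 as the cursor; every ID on DN2 is <= 1000,
    // so all of DN2's open files are silently skipped.
    List<Long> second = getFilesBlockingDecom(lastId, dataNodes);
    System.out.println("first batch size:  " + first.size());  // 1000
    System.out.println("second batch size: " + second.size()); // 0
  }
}
{code}

The root cause is that the inode-ID cursor assumes results are globally sorted by inode ID, but the scan proceeds per DataNode, so a batch can end on a large ID while smaller IDs on later DataNodes remain unvisited.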



