Posted to hdfs-dev@hadoop.apache.org by "farmmamba (Jira)" <ji...@apache.org> on 2023/07/26 15:29:00 UTC

[jira] [Created] (HDFS-17126) FsDataSetImpl#checkAndUpdate should delete duplicated block meta file.

farmmamba created HDFS-17126:
--------------------------------

             Summary: FsDataSetImpl#checkAndUpdate should delete duplicated block meta file.
                 Key: HDFS-17126
                 URL: https://issues.apache.org/jira/browse/HDFS-17126
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.4.0
            Reporter: farmmamba


Consider the following case:

We have one datanode, dn1, with two storages, ds1 and ds2.

Suppose ds1 contains blk_123 and blk_123_1001.meta, while ds2 contains only blk_123_1001.meta (no block file).

The current logic does not handle the orphaned blk_123_1001.meta in ds2; DirectoryScanner only prints a log saying the block file is missing, and the stale meta file stays on disk forever.

I think checkAndUpdate should clean up such a duplicated meta file.
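A minimal sketch of the proposed cleanup, outside the actual FsDatasetImpl code (class and method names here are illustrative, not the real datanode API): when a scan finds a meta file whose corresponding block file does not exist in the same storage, delete the orphaned meta file instead of only logging it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OrphanMetaCleaner {

    // blk_123_1001.meta -> blk_123 (strip the "_<genstamp>.meta" suffix)
    static String blockFileNameFromMeta(String metaName) {
        int lastUnderscore = metaName.lastIndexOf('_');
        return metaName.substring(0, lastUnderscore);
    }

    // Returns true if the meta file was orphaned and has been deleted.
    static boolean cleanIfOrphaned(Path metaFile) throws IOException {
        Path blockFile = metaFile.resolveSibling(
                blockFileNameFromMeta(metaFile.getFileName().toString()));
        if (Files.exists(blockFile)) {
            return false;  // block file present: meta file is valid, keep it
        }
        // No matching block file in this storage: the meta file is a
        // duplicate/orphan, so remove it instead of only logging.
        Files.delete(metaFile);
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the two storages from the description above.
        Path ds1 = Files.createTempDirectory("ds1");
        Path ds2 = Files.createTempDirectory("ds2");
        Files.createFile(ds1.resolve("blk_123"));
        Files.createFile(ds1.resolve("blk_123_1001.meta"));
        Files.createFile(ds2.resolve("blk_123_1001.meta"));  // orphan

        System.out.println("ds1 meta deleted: "
                + cleanIfOrphaned(ds1.resolve("blk_123_1001.meta")));  // false
        System.out.println("ds2 meta deleted: "
                + cleanIfOrphaned(ds2.resolve("blk_123_1001.meta")));  // true
    }
}
```

In the real datanode this decision would also have to consider in-flight writes and replica state, so a production fix would likely route the deletion through the existing scanner reconciliation path rather than deleting directly.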



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org