Posted to hdfs-dev@hadoop.apache.org by "yanbin.zhang (Jira)" <ji...@apache.org> on 2022/02/15 07:18:00 UTC

[jira] [Resolved] (HDFS-16450) Give priority to releasing DNs with less free space

     [ https://issues.apache.org/jira/browse/HDFS-16450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yanbin.zhang resolved HDFS-16450.
---------------------------------
    Resolution: Done

> Give priority to releasing DNs with less free space
> ---------------------------------------------------
>
>                 Key: HDFS-16450
>                 URL: https://issues.apache.org/jira/browse/HDFS-16450
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.3.0
>            Reporter: yanbin.zhang
>            Assignee: yanbin.zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-16450.001.patch
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When deleting redundant replicas, the replica on the DataNode with the least free space should be deleted first, rather than the one with the oldest heartbeat.
> {code:java}
> //BlockPlacementPolicyDefault#chooseReplicaToDelete
> final DatanodeStorageInfo storage;
> if (oldestHeartbeatStorage != null) {
>   storage = oldestHeartbeatStorage;
> } else if (minSpaceStorage != null) {
>   storage = minSpaceStorage;
> } else {
>   return null;
> }
> excessTypes.remove(storage.getStorageType());
> return storage;
> {code}
> Change the above logic to the following:
> {code:java}
> //BlockPlacementPolicyDefault#chooseReplicaToDelete
> final DatanodeStorageInfo storage;
> if (minSpaceStorage != null) {
>   storage = minSpaceStorage;
> } else if (oldestHeartbeatStorage != null) {
>   storage = oldestHeartbeatStorage;
> } else {
>   return null;
> }
> excessTypes.remove(storage.getStorageType());
> return storage;
> {code}
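
The change above swaps which candidate wins in BlockPlacementPolicyDefault#chooseReplicaToDelete: the storage with the least remaining space is now preferred, and the oldest-heartbeat storage becomes the fallback. Below is a minimal, self-contained sketch of that selection order; SimpleStorage, choose(), and the sample numbers are hypothetical stand-ins for illustration, not Hadoop's actual types or values.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical, standalone sketch of the proposed selection order.
 * SimpleStorage is a stand-in for DatanodeStorageInfo, not a real Hadoop type.
 */
public class ChooseReplicaToDeleteSketch {

  // Only the two attributes the deletion policy compares.
  record SimpleStorage(String id, long lastHeartbeatMs, long remainingBytes) {}

  // Proposed ordering: least remaining space wins; oldest heartbeat is the
  // fallback; null if there is no candidate at all.
  static SimpleStorage choose(SimpleStorage minSpaceStorage,
                              SimpleStorage oldestHeartbeatStorage) {
    if (minSpaceStorage != null) {
      return minSpaceStorage;
    } else if (oldestHeartbeatStorage != null) {
      return oldestHeartbeatStorage;
    }
    return null;
  }

  public static void main(String[] args) {
    List<SimpleStorage> candidates = Arrays.asList(
        new SimpleStorage("dn1", 1_000L, 50L << 30),   // oldest heartbeat
        new SimpleStorage("dn2", 9_000L, 10L << 30),   // least free space
        new SimpleStorage("dn3", 5_000L, 80L << 30));

    SimpleStorage minSpace = candidates.stream()
        .min(Comparator.comparingLong(SimpleStorage::remainingBytes))
        .orElse(null);
    SimpleStorage oldestHeartbeat = candidates.stream()
        .min(Comparator.comparingLong(SimpleStorage::lastHeartbeatMs))
        .orElse(null);

    // With the proposed ordering, dn2 (least free space) is chosen even
    // though dn1 has the oldest heartbeat.
    System.out.println("delete replica on: "
        + choose(minSpace, oldestHeartbeat).id());
  }
}
{code}

In other words, when a block is over-replicated, the replica removed first is the one sitting on the DataNode that is closest to full, which matches the intent in the issue title of releasing space on DNs with less free space.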



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org