Posted to hdfs-dev@hadoop.apache.org by "Shuyan Zhang (Jira)" <ji...@apache.org> on 2023/03/24 10:23:00 UTC
[jira] [Created] (HDFS-16964) Improve processing of excess redundancy after failover
Shuyan Zhang created HDFS-16964:
-----------------------------------
Summary: Improve processing of excess redundancy after failover
Key: HDFS-16964
URL: https://issues.apache.org/jira/browse/HDFS-16964
Project: Hadoop HDFS
Issue Type: Improvement
Reporter: Shuyan Zhang
After a failover, a block with excess redundancy cannot be processed until none of its replicas are stale, because a stale replica may already have been deleted. In other words, the NameNode must wait for the full block reports (FBRs) of all datanodes hosting the block before it can delete the redundant replicas. This wait is unnecessary: we can bypass stale replicas when handling excess redundancy and delete the non-stale excess replicas in a more timely manner.
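As a rough illustration of the proposed idea (a hypothetical sketch, not the actual BlockManager code; the class and method names here are invented), excess-replica selection could count and choose only among replicas on non-stale datanodes, so stale replicas neither block processing nor risk being chosen for deletion:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of "bypass stale replicas when dealing with excess
// replicas". A datanode is "stale" after failover until its FBR arrives.
public class ExcessReplicaSketch {
    public static class Replica {
        public final String datanode;
        public final boolean stale; // no FBR received since failover
        public Replica(String datanode, boolean stale) {
            this.datanode = datanode;
            this.stale = stale;
        }
    }

    // Returns replicas that may be deleted now. The excess count is computed
    // against non-stale replicas only, so we need not wait for all FBRs.
    // (Real HDFS would pick victims with placement-aware policies; taking a
    // prefix here is a deliberate simplification.)
    public static List<Replica> chooseExcess(List<Replica> replicas,
                                             int targetReplication) {
        List<Replica> live = new ArrayList<>();
        for (Replica r : replicas) {
            if (!r.stale) {
                live.add(r);
            }
        }
        int excess = live.size() - targetReplication;
        if (excess <= 0) {
            return Collections.emptyList();
        }
        return new ArrayList<>(live.subList(0, excess));
    }
}
```

With four non-stale replicas, one stale replica, and a target replication of three, this selects exactly one victim and never a stale replica, without waiting for the remaining FBR.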
--
This message was sent by Atlassian Jira
(v8.20.10#820010)