Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/07/15 20:53:00 UTC
[jira] [Work logged] (HDFS-16566) Erasure Coding: Recovery may cause excess replicas when a busy DN exists
[ https://issues.apache.org/jira/browse/HDFS-16566?focusedWorklogId=791571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-791571 ]
ASF GitHub Bot logged work on HDFS-16566:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 15/Jul/22 20:52
Start Date: 15/Jul/22 20:52
Worklog Time Spent: 10m
Work Description: jojochuang commented on code in PR #4252:
URL: https://github.com/apache/hadoop/pull/4252#discussion_r922506303
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java:
##########
@@ -127,7 +127,7 @@ public void processErasureCodingTasks(
reconInfo.getExtendedBlock(), reconInfo.getErasureCodingPolicy(),
reconInfo.getLiveBlockIndices(), reconInfo.getSourceDnInfos(),
reconInfo.getTargetDnInfos(), reconInfo.getTargetStorageTypes(),
- reconInfo.getTargetStorageIDs());
+ reconInfo.getTargetStorageIDs(), reconInfo.getExcludeReconstructedIndices());
Review Comment:
Probably makes more sense to make a StripedReconstructionInfo constructor that takes BlockECReconstructionInfo as the input parameter.
Let's leave this out as a follow up.
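
A minimal sketch of the delegation pattern the reviewer is suggesting. The two classes below are illustrative stand-ins, not the real Hadoop types (the real StripedReconstructionInfo and BlockECReconstructionInfo carry many more fields); they only show how a convenience constructor taking the reconstruction info could replace the field-by-field unpacking at the call site:

```java
// Stand-in for the NameNode-supplied reconstruction command (NOT the
// real org.apache.hadoop.hdfs class; trimmed to two fields for brevity).
class BlockECReconstructionInfo {
    private final byte[] liveBlockIndices;
    private final byte[] excludeReconstructedIndices;

    BlockECReconstructionInfo(byte[] live, byte[] exclude) {
        this.liveBlockIndices = live;
        this.excludeReconstructedIndices = exclude;
    }

    byte[] getLiveBlockIndices() { return liveBlockIndices; }
    byte[] getExcludeReconstructedIndices() { return excludeReconstructedIndices; }
}

// Stand-in for StripedReconstructionInfo, again trimmed to two fields.
class StripedReconstructionInfo {
    final byte[] liveBlockIndices;
    final byte[] excludeReconstructedIndices;

    // Existing-style constructor taking the unpacked fields.
    StripedReconstructionInfo(byte[] live, byte[] exclude) {
        this.liveBlockIndices = live;
        this.excludeReconstructedIndices = exclude;
    }

    // The convenience constructor the review proposes: delegate to the
    // field-by-field constructor so ErasureCodingWorker no longer has to
    // unpack reconInfo getter by getter.
    StripedReconstructionInfo(BlockECReconstructionInfo reconInfo) {
        this(reconInfo.getLiveBlockIndices(),
             reconInfo.getExcludeReconstructedIndices());
    }
}
```

With such a constructor, the call site in processErasureCodingTasks would shrink to a single `new StripedReconstructionInfo(reconInfo)`.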
Issue Time Tracking
-------------------
Worklog Id: (was: 791571)
Time Spent: 4h (was: 3h 50m)
> Erasure Coding: Recovery may cause excess replicas when a busy DN exists
> ------------------------------------------------------------------------
>
> Key: HDFS-16566
> URL: https://issues.apache.org/jira/browse/HDFS-16566
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.3.2
> Reporter: Ruinan Gu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h
> Remaining Estimate: 0h
>
> Simple case:
> RS-3-2 policy, internal blocks [0 (busy), 2, 3, 4], with block 1 missing and the DN holding block 0 busy.
> Because the busy DN is excluded from the sources, we get liveBlockIndices = [2, 3, 4] and additionalRepl = 1. The DN therefore sees liveBitSet = [2, 3, 4] and targets.length = 1.
> Per StripedWriter.initTargetIndices(), block 0 gets reconstructed instead of block 1, so the internal blocks become [0 (busy), 2, 3, 4, 0' (excess)]. Although the NameNode will later detect and delete the excess replica and correctly recover the missing block 1, this process is not the expected behavior: the reconstruction of 0' is wrong and unnecessary.
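
The index-selection mistake described above can be sketched as follows. This is an illustrative re-creation of the behavior, not the actual StripedWriter.initTargetIndices() code: target indices are chosen as the lowest block indices absent from the live set, so when the busy block 0 is excluded from liveBlockIndices, index 0 is chosen before the truly missing index 1:

```java
import java.util.BitSet;

public class TargetIndexDemo {
    // Pick the lowest block indices not present in liveBlockIndices,
    // capped by the number of reconstruction targets. This mirrors the
    // selection behavior described in the issue, where "not live" is
    // conflated with "missing".
    static int[] chooseTargetIndices(int[] liveBlockIndices,
                                     int totalBlocks, int numTargets) {
        BitSet live = new BitSet(totalBlocks);
        for (int i : liveBlockIndices) {
            live.set(i);
        }
        int[] targets = new int[numTargets];
        int count = 0;
        for (int i = 0; i < totalBlocks && count < numTargets; i++) {
            if (!live.get(i)) {
                targets[count++] = i;
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        // RS-3-2: 5 internal blocks (0..4); block 1 is missing and the
        // DN holding block 0 is busy, so liveBlockIndices = [2, 3, 4].
        int[] chosen = chooseTargetIndices(new int[]{2, 3, 4}, 5, 1);
        // Index 0 (the busy block) is picked rather than the missing
        // index 1, producing the excess replica 0' described above.
        System.out.println(chosen[0]); // prints 0
    }
}
```

With the excludeReconstructedIndices added by this PR, the busy index 0 can be skipped during selection, leaving block 1 as the reconstruction target.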
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org