Posted to hdfs-issues@hadoop.apache.org by "farmmamba (Jira)" <ji...@apache.org> on 2023/05/11 01:56:00 UTC
[jira] [Resolved] (HDFS-17002) Erasure coding: Generate parity blocks in time to prevent file corruption
[ https://issues.apache.org/jira/browse/HDFS-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
farmmamba resolved HDFS-17002.
------------------------------
Assignee: farmmamba
Resolution: Not A Problem
> Erasure coding: Generate parity blocks in time to prevent file corruption
> ------------------------------------------------------------------------
>
> Key: HDFS-17002
> URL: https://issues.apache.org/jira/browse/HDFS-17002
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.4.0
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
>
> In the current EC implementation, a corrupted parity block is not regenerated promptly.
> Consider the following scenario with the RS-6-3-1024k EC policy:
> If all three parity blocks p1, p2, p3 are corrupted or deleted, we are not aware of it.
> If, during that same period, a data block is also corrupted, the file becomes corrupted and can no longer be read, because decoding is impossible (see the sketch after this quoted description).
>
> So we should always regenerate a parity block promptly whenever it becomes unhealthy.
>
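A minimal sketch (not HDFS code; class and method names are illustrative) of the recoverability rule behind the quoted scenario: under RS-6-3, a striped block group tolerates at most three missing blocks, so once all three parity blocks are lost there is no margin left for a single data-block failure.

    import java.util.HashSet;
    import java.util.Set;

    public class Rs63RecoverabilityDemo {
        static final int DATA_BLOCKS = 6;   // RS-6-3-1024k: 6 data units per block group
        static final int PARITY_BLOCKS = 3; // RS-6-3-1024k: 3 parity units per block group

        // A block group is decodable only while the number of lost blocks
        // does not exceed the number of parity blocks.
        static boolean isDecodable(Set<Integer> lostBlockIndices) {
            return lostBlockIndices.size() <= PARITY_BLOCKS;
        }

        public static void main(String[] args) {
            // Scenario from the issue: all three parity blocks (indices 6, 7, 8)
            // are corrupted or deleted and nothing is reconstructed in time.
            Set<Integer> lost = new HashSet<>();
            lost.add(6);
            lost.add(7);
            lost.add(8);
            System.out.println("3 parity blocks lost, decodable: " + isDecodable(lost)); // true

            // Then a single data block (index 0) is also corrupted.
            lost.add(0);
            System.out.println("plus 1 data block lost, decodable: " + isDecodable(lost)); // false
        }
    }

The point of the example: the group is still decodable with all three parity blocks missing, so the loss goes unnoticed, but the very next block failure crosses the threshold and the file is unrecoverable.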
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org