Posted to hdfs-issues@hadoop.apache.org by "WangYuanben (Jira)" <ji...@apache.org> on 2023/06/02 03:02:00 UTC

[jira] [Updated] (HDFS-15170) EC: Block gets marked as CORRUPT in case of failover and pipeline recovery

     [ https://issues.apache.org/jira/browse/HDFS-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

WangYuanben updated HDFS-15170:
-------------------------------
    Description: 
Steps to Repro:
1. Start writing an EC file.
2. After more than one stripe has been written, stop one datanode.
3. After pipeline recovery, keep writing data.
4. Close the file.
5. Transition the namenode to standby and back to active.
6. Restart the datanode that was stopped in step 2.

The block report (BR) from the restarted datanode marks the block as CORRUPT, and block invalidation won't remove it, since post failover the blocks are on stale storage.
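For reference, a minimal, untested sketch of these steps against Hadoop's MiniDFSCluster test harness is below. The XOR-2-1-1024k policy, the 3-datanode sizing, the write sizes, and the class name are illustrative assumptions and are not taken from the attached patches.

{code:java}
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSCluster.DataNodeProperties;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;

public class EcCorruptAfterFailoverRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // HA pair of namenodes; exactly as many datanodes as the XOR-2-1 block
    // group width, so the stopped node is guaranteed to hold an internal block.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(3)
        .build();
    try {
      cluster.transitionToActive(0);
      DistributedFileSystem dfs = cluster.getFileSystem(0);
      dfs.enableErasureCodingPolicy("XOR-2-1-1024k");
      Path dir = new Path("/ec");
      dfs.mkdirs(dir);
      dfs.setErasureCodingPolicy(dir, "XOR-2-1-1024k");

      // More than one full stripe: XOR-2-1-1024k has two 1 MB data cells
      // per stripe, so 4 MB covers two stripes.
      byte[] data = new byte[4 * 1024 * 1024];
      new Random().nextBytes(data);

      // Steps 1-2: write past the first stripe, then stop one datanode.
      FSDataOutputStream out = dfs.create(new Path(dir, "file"));
      out.write(data);
      DataNodeProperties stopped = cluster.stopDataNode(0);

      // Steps 3-4: keep writing so the striped stream recovers from the
      // failed streamer, then close the file.
      out.write(data);
      out.close();

      // Step 5: transition the active namenode to standby and back to
      // active, which marks the datanode storages as stale.
      cluster.transitionToStandby(0);
      cluster.transitionToActive(0);

      // Step 6: bring the stopped datanode back; its block report to the
      // active namenode is what gets the internal block marked CORRUPT.
      cluster.restartDataNode(stopped, true);
      cluster.triggerBlockReports();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}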

  was:
*bold text*Steps to Repro:
1. Start writing an EC file.
2. After more than one stripe has been written, stop one datanode.
3. After pipeline recovery, keep writing data.
4. Close the file.
5. Transition the namenode to standby and back to active.
6. Restart the datanode that was stopped in step 2.

The block report (BR) from the restarted datanode marks the block as CORRUPT, and block invalidation won't remove it, since post failover the blocks are on stale storage.


> EC: Block gets marked as CORRUPT in case of failover and pipeline recovery
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15170
>                 URL: https://issues.apache.org/jira/browse/HDFS-15170
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Critical
>             Fix For: 3.3.1, 3.4.0, 3.2.3
>
>         Attachments: HDFS-15170-01.patch, HDFS-15170-02.patch, HDFS-15170-03.patch, HDFS-15170-04.patch
>
>
> Steps to Repro:
> 1. Start writing an EC file.
> 2. After more than one stripe has been written, stop one datanode.
> 3. After pipeline recovery, keep writing data.
> 4. Close the file.
> 5. Transition the namenode to standby and back to active.
> 6. Restart the datanode that was stopped in step 2.
> The block report (BR) from the restarted datanode marks the block as CORRUPT, and block invalidation won't remove it, since post failover the blocks are on stale storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org