Posted to hdfs-issues@hadoop.apache.org by "Haiyang Hu (Jira)" <ji...@apache.org> on 2023/02/18 11:03:00 UTC

[jira] [Updated] (HDFS-16899) Fix TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock failed

     [ https://issues.apache.org/jira/browse/HDFS-16899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiyang Hu updated HDFS-16899:
------------------------------
    Description: 
TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock occasionally fails.

The failing assertion is:
{code:java}
// verify that all internal blocks exist except b0
// the redundant internal blocks will not be deleted before the corrupted
// block gets reconstructed. but since we set
// DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY to 0, the reconstruction will
// not happen
lbs = cluster.getNameNodeRpc().getBlockLocations(filePath.toString(), 0,
    fileLen);
bg = (LocatedStripedBlock) (lbs.get(0));
assertEquals(groupSize + 1, bg.getBlockIndices().length); // fails here: lengths differ {code}
Under normal logic, 10 internal blocks should be returned: 8 live plus 2 redundant internal blocks.
However, the over-replication processing occasionally triggers an invalidate that removes a redundant internal block before the assertion runs,
so the number of internal blocks actually returned falls short of the expectation.


So the test needs to ensure the redundant internal blocks are not deleted before the assertion runs.
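The failure described above boils down to a check-then-act race: the assertion on the internal-block count can lose to the background invalidation of a redundant block. The toy model below sketches the race and the direction of the fix (hold off deletion until after the check); the class name, the Semaphore gate, and the simulated counter are all illustrative stand-ins, not the actual HDFS-16899 patch or HDFS internals.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class OverReplicationRaceSketch {
    public static void main(String[] args) throws InterruptedException {
        // Simulated count of internal blocks the NameNode would report:
        // 8 live + 2 redundant = 10 (groupSize + 1 in the real test).
        AtomicInteger internalBlocks = new AtomicInteger(10);

        // Permit gating the background "invalidator"; holding it pauses the
        // deletion of redundant blocks, which is what the test must ensure
        // before asserting on the block count.
        Semaphore deletionGate = new Semaphore(1);

        Thread invalidator = new Thread(() -> {
            try {
                deletionGate.acquire();           // may run before or after the check
                internalBlocks.decrementAndGet(); // remove one redundant block
                deletionGate.release();
            } catch (InterruptedException ignored) {
            }
        });

        // Acquire the gate BEFORE starting the invalidator and checking,
        // so the assertion cannot race with the removal.
        deletionGate.acquire();
        invalidator.start();
        int observed = internalBlocks.get();      // deterministic: still 10
        deletionGate.release();
        invalidator.join();

        System.out.println("observed=" + observed);
    }
}
```

Without the gate, `observed` would be 10 or 9 depending on thread scheduling, which is exactly the intermittent-failure pattern the test exhibits.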

> Fix TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock failed
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16899
>                 URL: https://issues.apache.org/jira/browse/HDFS-16899
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>
> TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock occasionally fails.
> The failing assertion is:
> {code:java}
> // verify that all internal blocks exist except b0
> // the redundant internal blocks will not be deleted before the corrupted
> // block gets reconstructed. but since we set
> // DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY to 0, the reconstruction will
> // not happen
> lbs = cluster.getNameNodeRpc().getBlockLocations(filePath.toString(), 0,
>     fileLen);
> bg = (LocatedStripedBlock) (lbs.get(0));
> assertEquals(groupSize + 1, bg.getBlockIndices().length); // fails here: lengths differ {code}
> Under normal logic, 10 internal blocks should be returned: 8 live plus 2 redundant internal blocks.
> However, the over-replication processing occasionally triggers an invalidate that removes a redundant internal block before the assertion runs,
> so the number of internal blocks actually returned falls short of the expectation.
> So the test needs to ensure the redundant internal blocks are not deleted before the assertion runs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org