Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/06/07 14:37:00 UTC
[jira] [Work logged] (HDFS-16533) COMPOSITE_CRC failed between replicated file and striped file.
[ https://issues.apache.org/jira/browse/HDFS-16533?focusedWorklogId=779133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-779133 ]
ASF GitHub Bot logged work on HDFS-16533:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 07/Jun/22 14:36
Start Date: 07/Jun/22 14:36
Worklog Time Spent: 10m
Work Description: ZanderXu commented on PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#issuecomment-1148763022
@Hexiaoqiao @jojochuang Could you help me review this patch? The failed UTs are not caused by this change and have already been fixed under other Jiras.
Issue Time Tracking
-------------------
Worklog Id: (was: 779133)
Time Spent: 3h 20m (was: 3h 10m)
> COMPOSITE_CRC failed between replicated file and striped file.
> --------------------------------------------------------------
>
> Key: HDFS-16533
> URL: https://issues.apache.org/jira/browse/HDFS-16533
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, hdfs-client
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 3h 20m
> Remaining Estimate: 0h
>
> When comparing COMPOSITE_CRC checksums, computed over random lengths, between a replicated file and a striped file containing the same data, the checksums unexpectedly differ.
> Steps to reproduce:
> {code:java}
> @Test(timeout = 90000)
> public void testStripedAndReplicatedFileChecksum2() throws Exception {
>   int abnormalSize = (dataBlocks * 2 - 2) * blockSize +
>       (int) (blockSize * 0.5);
>   prepareTestFiles(abnormalSize, new String[] {stripedFile1, replicatedFile});
>
>   int loopNumber = 100;
>   while (loopNumber-- > 0) {
>     int verifyLength = ThreadLocalRandom.current()
>         .nextInt(10, abnormalSize);
>     FileChecksum stripedFileChecksum1 = getFileChecksum(stripedFile1,
>         verifyLength, false);
>     FileChecksum replicatedFileChecksum = getFileChecksum(replicatedFile,
>         verifyLength, false);
>     if (checksumCombineMode.equals(ChecksumCombineMode.COMPOSITE_CRC.name())) {
>       // COMPOSITE_CRC is block-layout agnostic, so both files must match.
>       Assert.assertEquals(stripedFileChecksum1, replicatedFileChecksum);
>     } else {
>       // MD5-of-CRC checksums depend on block layout, so they must differ.
>       Assert.assertNotEquals(stripedFileChecksum1, replicatedFileChecksum);
>     }
>   }
> } {code}
> After tracing the root cause: `FileChecksumHelper#makeCompositeCrcResult` may compute an incorrect `consumedLastBlockLength` when updating the checksum for the last block of the requested length, because that block is not necessarily the last block of the file.
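> For illustration, a minimal sketch of the length computation described above, in plain Java. Everything in it is hypothetical (class, method, and parameter names are invented for the sketch; the real logic lives in `FileChecksumHelper#makeCompositeCrcResult`, and the actual fix is in PR #4155). The point is only that the last block of the requested range may be an interior block of the file, so just the remaining requested bytes of that block should be folded into the composite CRC, never the block's full length.
> {code:java}
> // Minimal, self-contained sketch only; NOT the actual HDFS patch. The names
> // requestedLength, precedingBlocksLength and lastBlockSize are hypothetical
> // stand-ins for state tracked in FileChecksumHelper#makeCompositeCrcResult.
> public final class ConsumedLastBlockLengthSketch {
>
>   // Bytes of the final located block that fall inside the requested range.
>   // When the range ends partway through that block (e.g. the block is an
>   // interior block of the file), only the remaining requested bytes should
>   // be consumed, not the block's full length.
>   static long consumedLastBlockLength(long requestedLength,
>       long precedingBlocksLength, long lastBlockSize) {
>     long remaining = requestedLength - precedingBlocksLength;
>     return Math.min(remaining, lastBlockSize);
>   }
>
>   public static void main(String[] args) {
>     long blockSize = 4L * 1024 * 1024;  // 4 MiB blocks
>     long requested = 10L * 1024 * 1024; // checksum requested over 10 MiB
>     // Two full blocks precede the last block of the range.
>     long consumed = consumedLastBlockLength(requested, 2 * blockSize, blockSize);
>     System.out.println(consumed);       // prints 2097152 (2 MiB), not 4 MiB
>   }
> } {code}
> Consuming the full block length instead of the remaining requested bytes would make the composite CRC of a truncated range disagree between layouts whose block boundaries differ, which matches the failure the test above exposes.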
--
This message was sent by Atlassian Jira
(v8.20.7#820007)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org