Posted to hdfs-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2010/12/03 08:14:11 UTC

[jira] Created: (HDFS-1526) Dfs client name for a map/reduce task should have some randomness

Dfs client name for a map/reduce task should have some randomness
-----------------------------------------------------------------

                 Key: HDFS-1526
                 URL: https://issues.apache.org/jira/browse/HDFS-1526
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
            Reporter: Hairong Kuang
            Assignee: Hairong Kuang
             Fix For: 0.23.0


Fsck shows one of the files in our dfs cluster is corrupt.

# /bin/hadoop fsck aFile -files -blocks -locations
aFile: 4633 bytes, 2 block(s): 
aFile: CORRUPT block blk_-4597378336099313975
OK
0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]

Status: CORRUPT

On disk, these two blocks have the same size and the same content. It turns out the writer of the file was a multi-threaded map task in which each thread may write to the same file. One possible interleaving of two threads that could make this happen:
[T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to aFile] [T2: addBlock 1 to aFile] ...

Because T1 and T2 share the same client name, which is the map task id, the above sequence of operations proceeds without triggering any lease exception, eventually producing a corrupt file. To solve the problem, a map/reduce task's client name could be formed from its task id followed by a random number.
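A minimal sketch of the proposed naming scheme (the DFSClient_ prefix, class name, and helper method below are illustrative assumptions, not the actual patch):

    import java.util.Random;

    // Illustrative sketch only: derive a DFS client name from the task id
    // plus a random suffix, so two writers started from the same task id
    // no longer look identical to the namenode's lease manager.
    public class ClientNameSketch {
      private static final Random RAND = new Random();

      // taskId would be something like "attempt_201012030814_0001_m_000003_0"
      public static String newClientName(String taskId) {
        // Hypothetical format; the real prefix/format is up to the patch.
        return "DFSClient_" + taskId + "_" + RAND.nextInt(Integer.MAX_VALUE);
      }
    }

With distinct client names, T2's re-create of aFile would take the lease under a different holder, so T1's later addBlock would fail with a lease exception instead of silently extending the recreated file.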
