Posted to hdfs-dev@hadoop.apache.org by "Vinay (JIRA)" <ji...@apache.org> on 2014/01/08 05:36:52 UTC

[jira] [Created] (HDFS-5728) [Diskfull] Block recovery will fail if the metafile not having crc for all chunks of the block

Vinay created HDFS-5728:
---------------------------

             Summary: [Diskfull] Block recovery will fail if the metafile not having crc for all chunks of the block
                 Key: HDFS-5728
                 URL: https://issues.apache.org/jira/browse/HDFS-5728
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
    Affects Versions: 2.2.0
            Reporter: Vinay
            Assignee: Vinay


1. A client (a regionserver) has opened a stream to write its WAL to HDFS. This is not a one-time upload; data is written slowly.
2. One of the DataNodes ran out of disk space (its disks were filled up by other data).
3. Unfortunately the block was being written to only this datanode in the cluster, so the client write also failed.

4. After some time the disk was freed up and all processes were restarted.
5. Now the HMaster tries to recover the file by calling recoverLease.
At this point recovery fails, reporting a file length mismatch.

When checked:
 actual block file length: 62484480
 calculated block length: 62455808

This was because the metafile had CRCs for only 62455808 bytes, so 62455808 was taken as the block size.

No matter how many times it was retried, recovery kept failing.
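The calculated length above follows from the metafile size. Here is a minimal sketch of that arithmetic, assuming the usual HDFS defaults of a 7-byte metafile header, 4-byte CRC32 checksums, and 512 bytes of data per checksum chunk; the class and method names are illustrative, not the actual DataNode code:

```java
public class MetaLengthSketch {
    // Assumed constants for a default HDFS setup (illustrative only).
    static final int HEADER_LEN = 7;          // metafile header (version + checksum header)
    static final int CHECKSUM_SIZE = 4;       // CRC32 is 4 bytes per chunk
    static final int BYTES_PER_CHECKSUM = 512;

    // Block length as derived from the metafile: one data chunk per stored CRC.
    static long lengthFromMeta(long metaFileLen) {
        long numChunks = (metaFileLen - HEADER_LEN) / CHECKSUM_SIZE;
        return numChunks * BYTES_PER_CHECKSUM;
    }

    public static void main(String[] args) {
        long metaFileLen = 487943;            // hypothetical: 7 + 121984 CRCs * 4 bytes
        long calculated = lengthFromMeta(metaFileLen);
        long actualBlockFile = 62484480;      // on-disk block file length from the report

        System.out.println("calculated    = " + calculated);
        // The last bytes of the block file have no CRC in the metafile,
        // so recovery computes a shorter length and reports a mismatch.
        System.out.println("missing bytes = " + (actualBlockFile - calculated));
    }
}
```

With these numbers the metafile only covers 121984 chunks (62455808 bytes), leaving 28672 bytes (56 chunks) of the 62484480-byte block file unaccounted for, which is exactly the mismatch recovery keeps tripping over.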




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)