Posted to mapreduce-dev@hadoop.apache.org by "Jens Rabe (JIRA)" <ji...@apache.org> on 2015/01/15 11:20:34 UTC

[jira] [Created] (MAPREDUCE-6216) Seeking backwards in MapFiles does not always correctly sync the underlying SequenceFile, resulting in "File is corrupt" exceptions

Jens Rabe created MAPREDUCE-6216:
------------------------------------

             Summary: Seeking backwards in MapFiles does not always correctly sync the underlying SequenceFile, resulting in "File is corrupt" exceptions
                 Key: MAPREDUCE-6216
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6216
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.4.1
            Reporter: Jens Rabe
            Priority: Critical


On some occasions, when reading MapFiles that were generated by MapFileOutputFormat with BZIP2 BLOCK compression, calling getClosest(key, value, true) on the MapFile.Reader causes an IOException with the message "File is corrupt!". Running "hdfs fsck" reports everything as OK, and the underlying data and index files can also be read correctly with a SequenceFile.Reader.
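
For reference, a minimal sketch of the pattern that fails for me (the path, key/value types, and record count here are illustrative assumptions, not the actual job):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class GetClosestRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dir = new Path("/tmp/mapreduce-6216");  // hypothetical path
    CompressionCodec codec = ReflectionUtils.newInstance(BZip2Codec.class, conf);

    // Write a MapFile with BLOCK-compressed BZIP2 data, as
    // MapFileOutputFormat would.
    try (MapFile.Writer writer = new MapFile.Writer(conf, dir,
        MapFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class),
        SequenceFile.Writer.compression(
            SequenceFile.CompressionType.BLOCK, codec))) {
      for (int i = 0; i < 100000; i++) {
        writer.append(new IntWritable(i), new Text("value-" + i));
      }
    }

    // Seek forward, then backwards; on affected files the second call
    // intermittently throws IOException("File is corrupt!").
    try (MapFile.Reader reader = new MapFile.Reader(dir, conf)) {
      Text value = new Text();
      reader.getClosest(new IntWritable(90000), value, true);
      reader.getClosest(new IntWritable(10), value, true);  // backward seek
    }
  }
}
{code}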

The exception happens in the readBlock() method of the SequenceFile.Reader class.

My guess is that, since MapFile.Reader's seekInternal() method calls seek() instead of sync(), the offsets stored in the index file must point to positions that are valid seek targets. When the exception occurs, the position the cursor is moved to is not such a valid position.
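
In code terms, the difference between the two calls (a sketch only; the reader setup and the offset are assumptions, using the conf and path from the sketch above):

{code:java}
SequenceFile.Reader reader = new SequenceFile.Reader(conf,
    SequenceFile.Reader.file(new Path("/tmp/mapreduce-6216/data")));
long indexedPos = 123456L;  // hypothetical offset read from the MapFile index

// seek() trusts the caller: the position must be an exact record/block
// boundary, otherwise the next read fails (for BLOCK compression, inside
// readBlock() with "File is corrupt!").
reader.seek(indexedPos);

// sync() scans forward from the given position to the next sync marker,
// so an imprecise offset still ends up at a readable position.
reader.sync(indexedPos);
{code}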

So I think the culprit is the generation of the index file when MapFiles are written.
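
One way to check this (a sketch, with the same imports as above plus java.io.IOException and org.apache.hadoop.io.LongWritable; the key type and path are again assumptions) is to read the index SequenceFile and probe each recorded offset the way seekInternal() uses it, with seek() followed by a read:

{code:java}
Configuration conf = new Configuration();
Path dir = new Path("/tmp/mapreduce-6216");  // hypothetical path
try (SequenceFile.Reader index = new SequenceFile.Reader(conf,
         SequenceFile.Reader.file(new Path(dir, MapFile.INDEX_FILE_NAME)));
     SequenceFile.Reader data = new SequenceFile.Reader(conf,
         SequenceFile.Reader.file(new Path(dir, MapFile.DATA_FILE_NAME)))) {
  IntWritable key = new IntWritable();       // the MapFile's key type
  LongWritable offset = new LongWritable();  // index values are data offsets
  IntWritable dataKey = new IntWritable();
  Text dataValue = new Text();
  while (index.next(key, offset)) {
    try {
      data.seek(offset.get());               // what seekInternal() does
      data.next(dataKey, dataValue);         // triggers readBlock() for BLOCK compression
    } catch (IOException e) {
      System.out.println("Bad index offset " + offset + " for key " + key
          + ": " + e.getMessage());
    }
  }
}
{code}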


