Posted to common-issues@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2010/10/26 02:24:31 UTC
[jira] Commented: (HADOOP-6663) BlockDecompressorStream get EOF exception when decompressing the file compressed from empty file
[ https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924795#action_12924795 ]
Hudson commented on HADOOP-6663:
--------------------------------
Integrated in Hadoop-Common-trunk-Commit #399 (See [https://hudson.apache.org/hudson/job/Hadoop-Common-trunk-Commit/399/])
Reverting HADOOP-6663.
HADOOP-6663. BlockDecompressorStream get EOF exception when decompressing the file compressed from empty file. Contributed by Kang Xiao.
> BlockDecompressorStream get EOF exception when decompressing the file compressed from empty file
> ------------------------------------------------------------------------------------------------
>
> Key: HADOOP-6663
> URL: https://issues.apache.org/jira/browse/HADOOP-6663
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.20.2
> Reporter: Kang Xiao
> Assignee: Kang Xiao
> Fix For: 0.22.0
>
> Attachments: BlockDecompressorStream.java.patch, BlockDecompressorStream.java.patch, BlockDecompressorStream.patch, HADOOP-6663.patch
>
>
> An empty file can be compressed using BlockCompressorStream, which serves block-based compression algorithms such as LZO. However, when decompressing the resulting file, BlockDecompressorStream gets an EOF exception.
> Here is a typical exception stack:
> java.io.EOFException
> at org.apache.hadoop.io.compress.BlockDecompressorStream.rawReadInt(BlockDecompressorStream.java:125)
> at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:96)
> at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:82)
> at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
> at java.io.InputStream.read(InputStream.java:85)
> at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
> at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:134)
> at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:39)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:186)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:170)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:18)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> at org.apache.hadoop.mapred.Child.main(Child.java:196)
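The stack trace shows the failure originating in rawReadInt(), which reads the 4-byte block-length header of the next compressed block; for a file compressed from empty input there is no block to read, so the plain read hits end-of-stream and throws. Below is a minimal, self-contained sketch of that failure mode and the general shape of a fix (treat a clean EOF at a block boundary as end-of-stream rather than an error). The class and method names here are illustrative only, not the actual Hadoop implementation or the committed patch.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class BlockLengthReader {

    // Reads a 4-byte big-endian block length, in the style of rawReadInt().
    // A clean EOF before the first byte of a header means "no more blocks"
    // (the empty-file case) and returns -1 instead of throwing; an EOF in
    // the middle of a header is still a genuinely truncated stream.
    static int readBlockLength(InputStream in) throws IOException {
        int b1 = in.read();
        if (b1 == -1) {
            return -1; // clean EOF at a block boundary: empty compressed file
        }
        int b2 = in.read(), b3 = in.read(), b4 = in.read();
        if ((b2 | b3 | b4) < 0) {
            throw new EOFException("truncated block header");
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    public static void main(String[] args) throws IOException {
        // Empty stream: previously an EOFException, now a clean end-of-stream.
        System.out.println(readBlockLength(new ByteArrayInputStream(new byte[0])));
        // A well-formed header announcing a 16-byte block.
        System.out.println(readBlockLength(
                new ByteArrayInputStream(new byte[]{0, 0, 0, 16})));
    }
}
```

Distinguishing the two EOF cases is the key point: only an end-of-stream that lands exactly on a block boundary is benign.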