Posted to dev@avro.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2013/05/10 02:09:18 UTC

[jira] [Commented] (AVRO-1326) Files written with bzip2 codec cannot be read

    [ https://issues.apache.org/jira/browse/AVRO-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13653408#comment-13653408 ] 

Hudson commented on AVRO-1326:
------------------------------

Integrated in AvroJava #372 (See [https://builds.apache.org/job/AvroJava/372/])
    AVRO-1326. Java: Fix bug in BZip2 codec. (Revision 1480771)

     Result = SUCCESS
cutting : 
Files : 
* /avro/trunk/CHANGES.txt
* /avro/trunk/lang/java/avro/src/main/java/org/apache/avro/file/BZip2Codec.java
* /avro/trunk/lang/java/avro/src/test/java/org/apache/avro/file/TestBZip2Codec.java

                
> Files written with bzip2 codec cannot be read
> ---------------------------------------------
>
>                 Key: AVRO-1326
>                 URL: https://issues.apache.org/jira/browse/AVRO-1326
>             Project: Avro
>          Issue Type: Bug
>          Components: java
>    Affects Versions: 1.7.4
>            Reporter: Kevin Irwin
>            Assignee: Doug Cutting
>            Priority: Critical
>             Fix For: 1.7.5
>
>         Attachments: AVRO-1326.patch, BzipTest.java
>
>
> When attempting to read a file written using the bzip2 codec for compression, the following exception is thrown upon completion of the first encoded block:
> Exception in thread "main" org.apache.avro.AvroRuntimeException: java.io.IOException: Block read partially, the data may be corrupt
> 	at org.apache.avro.file.DataFileStream.hasNext(DataFileStream.java:210)
> 	at BzipTests.main(BzipTests.java:28)
> Caused by: java.io.IOException: Block read partially, the data may be corrupt
> 	at org.apache.avro.file.DataFileStream.hasNext(DataFileStream.java:194)
> 	... 1 more
> An inspection of BZip2Codec shows the root cause is in the compress() method: the entire backing array of the supplied ByteBuffer is compressed, not just the valid portion between its position and limit. On decompress, the resulting length is then larger than the uncompressed block size recorded in the file.
> On line 51:
> outputStream.write(uncompressedData.array());
> should be:
> outputStream.write(uncompressedData.array(), uncompressedData.position(), uncompressedData.remaining());
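For context, here is a sketch of how compress() looks with the corrected write, assuming the codec wraps Commons Compress's BZip2CompressorOutputStream around a ByteArrayOutputStream as in the 1.7.4 sources; apart from the one-line change quoted above, the surrounding structure is illustrative:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import org.apache.commons.compress.compressors.bzip2.BZip2CompressorOutputStream;

    public ByteBuffer compress(ByteBuffer uncompressedData) throws IOException {
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      BZip2CompressorOutputStream outputStream = new BZip2CompressorOutputStream(baos);
      try {
        // Compress only the valid region of the buffer (position..limit), not the
        // whole backing array; otherwise the decompressed length exceeds the
        // uncompressed block size recorded in the file and reads fail with
        // "Block read partially, the data may be corrupt".
        outputStream.write(uncompressedData.array(),
                           uncompressedData.position(),
                           uncompressedData.remaining());
      } finally {
        outputStream.close();
      }
      return ByteBuffer.wrap(baos.toByteArray());
    }

With this change, a file written through DataFileWriter with CodecFactory.bzip2Codec() reads back through DataFileStream without the "Block read partially" error shown above.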
