Posted to dev@hc.apache.org by "Reuben Pasquini (JIRA)" <ji...@apache.org> on 2011/06/22 18:48:47 UTC
[jira] [Closed] (HTTPCLIENT-1103) GzipDecompressingEntity (and therefore ContentEncodingHttpClient) not consistent with EntityUtils.consumeEntity
[ https://issues.apache.org/jira/browse/HTTPCLIENT-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Reuben Pasquini closed HTTPCLIENT-1103.
---------------------------------------
Resolution: Fixed
Fix Version/s: 4.2 Alpha1
               4.1.2
Oleg already fixed this in SVN for a previous ticket ...
> GzipDecompressingEntity (and therefore ContentEncodingHttpClient) not consistent with EntityUtils.consumeEntity
> ---------------------------------------------------------------------------------------------------------------
>
> Key: HTTPCLIENT-1103
> URL: https://issues.apache.org/jira/browse/HTTPCLIENT-1103
> Project: HttpComponents HttpClient
> Issue Type: Bug
> Components: HttpClient
> Affects Versions: 4.1.1
> Environment: jdk6
> Reporter: Reuben Pasquini
> Fix For: 4.1.2, 4.2 Alpha1
>
>
> Invoking EntityUtils.consume( entity ) after a previous call to entity.getContent (and subsequent processing of the content) throws a java.io.EOFException when gzip decompression support is enabled via ContentEncodingHttpClient or some similar mechanism. I invoke EntityUtils.consume in a 'finally' block - maybe I'm not using the API correctly ... ?
> java.io.EOFException
> at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:207)
> at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:197)
> at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:136)
> at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:58)
> at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:68)
> at org.apache.http.client.entity.GzipDecompressingEntity.getContent(GzipDecompressingEntity.java:63)
> at org.apache.http.conn.BasicManagedEntity.getContent(BasicManagedEntity.java:88)
> at org.apache.http.util.EntityUtils.consume(EntityUtils.java:65)
> I believe the problem is that the underlying DecompressingEntity allocates a new GZIPInputStream for each call to getContent, rather than caching the stream created by the first getContent call.
> http://svn.apache.org/repos/asf/httpcomponents/httpclient/trunk/httpclient/src/main/java/org/apache/http/client/entity/DecompressingEntity.java
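[Editor's note] The failure mode described above can be demonstrated with a stdlib-only sketch (this is illustrative code, not HttpClient's; the class name `GzipEofDemo` is made up): draining a gzip stream and then wrapping the same exhausted underlying stream in a second `GZIPInputStream` triggers the same `EOFException` from the header read seen in the stack trace.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipEofDemo {

    /** Returns true if re-wrapping the exhausted stream throws EOFException. */
    static boolean reproduces() throws IOException {
        // Build a small gzip-compressed payload in memory.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("hello".getBytes("UTF-8"));
        }
        InputStream raw = new ByteArrayInputStream(buf.toByteArray());

        // First getContent(): wrap the stream and consume it fully.
        GZIPInputStream first = new GZIPInputStream(raw);
        while (first.read() != -1) { /* drain */ }

        // Second getContent(): a new GZIPInputStream is built over the SAME
        // underlying stream; the gzip header read hits end-of-stream.
        try {
            new GZIPInputStream(raw);
            return false;
        } catch (EOFException expected) {
            return true; // same failure as readHeader() in the trace above
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(reproduces()
            ? "EOFException on second wrap, as in the stack trace"
            : "no exception");
    }
}
```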
> The "CustomProtocolInterceptors" example has the same bug: http://hc.apache.org/httpcomponents-client-ga/examples.html
> I worked around the problem by implementing the example with my own GzipDecompressingEntity (Scala code - the lazy val is not evaluated until first accessed, then reused):
>
> import org.apache.http
> import java.util.zip.GZIPInputStream
>
> class GzipDecompressingEntity(entity: http.HttpEntity) extends http.entity.HttpEntityWrapper(entity) {
>   // lazy val: the wrapped stream is created on first access and cached
>   private lazy val gzipStream = new GZIPInputStream(entity.getContent())
>
>   /** Wrap the entity stream in a GZIPInputStream */
>   override def getContent(): java.io.InputStream = gzipStream
>
>   /** Return -1 - unzipped content size is unknown */
>   override def getContentLength(): Long = -1L
> }
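[Editor's note] For Java users, the same caching idea can be sketched without Scala's lazy val (the class name `CachingGzipWrapper` and its shape are illustrative assumptions, not HttpClient's actual fix): create the `GZIPInputStream` on the first `getContent()` call and return the cached instance on every later call, so a final `consume` does not attempt to re-parse a gzip header from an exhausted stream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CachingGzipWrapper {
    private final InputStream wrapped;   // the entity's raw content stream
    private InputStream gzipStream;      // lazily created, then reused

    public CachingGzipWrapper(InputStream wrapped) {
        this.wrapped = wrapped;
    }

    /** Wrap the underlying stream once; later calls return the cached stream. */
    public synchronized InputStream getContent() throws IOException {
        if (gzipStream == null) {
            gzipStream = new GZIPInputStream(wrapped);
        }
        return gzipStream;
    }

    /** Unzipped length is unknown, mirroring the Scala getContentLength. */
    public long getContentLength() {
        return -1L;
    }

    public static void main(String[] args) throws IOException {
        // Demo: two getContent() calls yield the same stream, so draining it
        // and then "consuming" via getContent() again cannot throw EOFException.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("hello".getBytes("UTF-8"));
        }
        CachingGzipWrapper w =
            new CachingGzipWrapper(new ByteArrayInputStream(buf.toByteArray()));
        InputStream a = w.getContent();
        while (a.read() != -1) { /* drain */ }
        System.out.println(w.getContent() == a
            ? "cached stream reused; no EOFException"
            : "new stream allocated");
    }
}
```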
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org