Posted to notifications@libcloud.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2015/05/13 21:09:00 UTC
[jira] [Commented] (LIBCLOUD-711) Periodic GZIP CRC check failure
[ https://issues.apache.org/jira/browse/LIBCLOUD-711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542472#comment-14542472 ]
ASF GitHub Bot commented on LIBCLOUD-711:
-----------------------------------------
GitHub user chrisob opened a pull request:
https://github.com/apache/libcloud/pull/519
[LIBCLOUD-711] Fixed occasional CRC check failure when decompressing large responses
fixes issue: https://issues.apache.org/jira/browse/LIBCLOUD-711
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/chrisob/libcloud LIBCLOUD-711_gzip_crc_check_fail_fix
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/libcloud/pull/519.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #519
----
commit e931f2b9fd19a1d2b16e25ee44c96acb768371dc
Author: Chris O'Brien <ch...@gmail.com>
Date: 2015-05-13T19:07:50Z
[LIBCLOUD-711] Fixed occasional CRC check failure when decompressing large responses
----
> Periodic GZIP CRC check failure
> -------------------------------
>
> Key: LIBCLOUD-711
> URL: https://issues.apache.org/jira/browse/LIBCLOUD-711
> Project: Libcloud
> Issue Type: Bug
> Environment: Python 2.6.6
> Reporter: Chris O'Brien
>
> When parsing a gzipped server response, the CRC check occasionally fails while decompressing the response body; most of the time the response is decompressed and parsed correctly.
> The compressed data itself is intact (verified by writing it to a file and gunzipping it), and the issue only seems to happen with chunked responses.
> I believe this is because the response body is unusually large (~43K uncompressed).
> I'm using the CloudSigma driver, and it specifically happens with the list_nodes method (the one that returns the largest amount of data):
> {noformat}
> IOError: CRC check failed 0xdebd5ac != 0x42c31c02L
> ...
> File "/home/dbs_support/dev/libcloudsigma/test.py", line 298, in _get_node
> nodes = self.cloud_driver.list_nodes()
> File "/home/dbs_support/lib/libcloud/compute/drivers/cloudsigma.py", line 1025, in list_nodes
> response = self.connection.request(action=action, method='GET').object
> File "/home/dbs_support/lib/libcloud/compute/drivers/cloudsigma.py", line 965, in request
> raw=raw)
> File "/home/dbs_support/lib/libcloud/common/base.py", line 750, in request
> 'response': self.connection.getresponse()}
> File "/home/dbs_support/lib/libcloud/common/base.py", line 404, in getresponse
> r, rv = self._log_response(r)
> File "/home/dbs_support/lib/libcloud/common/base.py", line 311, in _log_response
> body = decompress_data('gzip', body)
> File "/home/dbs_support/lib/libcloud/utils/compression.py", line 39, in decompress_data
> return gzip.GzipFile(fileobj=cls(data)).read()
> File "/usr/lib64/python2.6/gzip.py", line 212, in read
> self._read(readsize)
> File "/usr/lib64/python2.6/gzip.py", line 267, in _read
> self._read_eof()
> File "/usr/lib64/python2.6/gzip.py", line 304, in _read_eof
> hex(self.crc)))
> {noformat}
> In utils/compression.py, would it be wise to replace line #39 ({{return gzip.GzipFile(fileobj=cls(data)).read()}}) with a zlib-based call? This seems to fix the problem for me, but I'm not sure whether it has any negative impacts.
> For example:
> {noformat}
> decomp = zlib.decompressobj(16+zlib.MAX_WBITS)
> return decomp.decompress(data)
> {noformat}
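> For reference, a fuller sketch of how the zlib-based version might look. The standalone helper name {{decompress_gzip}} is only illustrative (the real {{decompress_data}} takes the encoding type as its first argument, as the traceback shows), and the extra {{flush()}} call is my own addition to make sure no buffered output is dropped:
> {noformat}
> import zlib
>
> def decompress_gzip(data):
>     # 16 + MAX_WBITS tells zlib to expect a gzip header and trailer,
>     # so zlib itself verifies the CRC stored in the trailer.
>     decompressor = zlib.decompressobj(16 + zlib.MAX_WBITS)
>     decompressed = decompressor.decompress(data)
>     # flush() returns any output still buffered in the decompressor.
>     return decompressed + decompressor.flush()
> {noformat}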
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)