Posted to notifications@libcloud.apache.org by "Quentin Pradet (JIRA)" <ji...@apache.org> on 2017/05/04 06:42:04 UTC

[jira] [Created] (LIBCLOUD-916) s3.upload_object_via_stream no longer accepts unicode

Quentin Pradet created LIBCLOUD-916:
---------------------------------------

             Summary: s3.upload_object_via_stream no longer accepts unicode
                 Key: LIBCLOUD-916
                 URL: https://issues.apache.org/jira/browse/LIBCLOUD-916
             Project: Libcloud
          Issue Type: Bug
            Reporter: Quentin Pradet


Consider this snippet:

{code}
import io
from libcloud.storage.base import Container
from libcloud.storage.drivers.google_storage import GoogleStorageDriver

driver = GoogleStorageDriver(key='replaceme', secret='replaceme')
container = Container(name='container', driver=driver, extra={})
# io.StringIO yields unicode (str) chunks, which libcloud 2.0 no longer encodes
driver.upload_object_via_stream(io.StringIO(' '), container, 'path')
{code}

In libcloud 1.5, upload_object_via_stream called libcloud.utils.files.read_in_chunks, which encodes the data as UTF-8. In libcloud 2.0, the unicode data is passed directly to requests, which requires a bytes body when sending in chunks.

Should we work around the issue in libcloud, or decide that the iterator must yield bytes?
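As a caller-side workaround (a sketch, not part of the libcloud API), the text stream can be wrapped in a generator that encodes each chunk to bytes before it reaches the driver, mirroring what read_in_chunks did in 1.5; the encode_chunks helper name below is hypothetical:

{code}
import io

def encode_chunks(text_iterable, encoding='utf-8'):
    # Yield each str chunk from the iterable as UTF-8 bytes,
    # so the driver only ever sees a bytes body.
    for chunk in text_iterable:
        yield chunk.encode(encoding)

# driver.upload_object_via_stream(encode_chunks(io.StringIO(' ')), container, 'path')
{code}

This keeps the fix in user code; deciding it in libcloud would mean either restoring the implicit encoding or documenting that the iterator must yield bytes.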



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)