Posted to notifications@libcloud.apache.org by "John Carr (JIRA)" <ji...@apache.org> on 2013/08/21 22:15:52 UTC

[jira] [Commented] (LIBCLOUD-378) S3 uploads fail on small iterators

    [ https://issues.apache.org/jira/browse/LIBCLOUD-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746792#comment-13746792 ] 

John Carr commented on LIBCLOUD-378:
------------------------------------

I think this is because read_chunks has fill_size=False by default. Judging by the docstring, it should be set to True for the S3 multi-part upload to work in my case.
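
For reference, here is a minimal sketch of what a fill_size-style chunk reader does. This is only an illustration (assuming a bytes iterator), not the actual libcloud implementation, and the names follow the comment above rather than the real signature:

    def read_in_chunks(iterator, chunk_size=5 * 1024 * 1024, fill_size=False):
        # Illustration only. With fill_size=False, each piece produced by
        # the iterator is yielded as-is, so a line iterator yields tiny
        # chunks. With fill_size=True, data is buffered until at least
        # chunk_size bytes are available.
        buffered = b''
        for data in iterator:
            if not fill_size:
                yield data
                continue
            buffered += data
            while len(buffered) >= chunk_size:
                yield buffered[:chunk_size]
                buffered = buffered[chunk_size:]
        if fill_size and buffered:
            yield buffered  # final chunk may be shorter than chunk_size

With fill_size=False, a StringIO iterator yields one line per chunk, which matches the tiny multi-part "parts" seen in the report below.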

It would be nice to fall back to the non-multi-part API if the first chunk is smaller than 5 MB, but I don't know how hard it would be to refactor the code that way.
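
A rough sketch of that fallback idea, reusing the read_in_chunks sketch above. The single_put and multipart_put callables are placeholders standing in for whatever the driver actually does; they are not real libcloud methods:

    import itertools

    S3_MIN_PART_SIZE = 5 * 1024 * 1024  # S3 minimum size for non-final parts

    def upload_with_fallback(iterator, single_put, multipart_put,
                             part_size=S3_MIN_PART_SIZE):
        # Buffer up to one full part; if the stream ends before that,
        # the payload fits in a single request and multi-part is unnecessary.
        chunks = read_in_chunks(iterator, chunk_size=part_size, fill_size=True)
        first = next(chunks, b'')
        if len(first) < part_size:
            return single_put(first)  # plain, non-multi-part PUT
        # Re-attach the first chunk and stream the rest through multi-part.
        return multipart_put(itertools.chain([first], chunks))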
                
> S3 uploads fail on small iterators
> ----------------------------------
>
>                 Key: LIBCLOUD-378
>                 URL: https://issues.apache.org/jira/browse/LIBCLOUD-378
>             Project: Libcloud
>          Issue Type: Bug
>          Components: Storage
>            Reporter: John Carr
>
> I wrote a small script that uploaded the output of a buildbot job and then updated an XML file. The large binary blob worked fine; however, the XML file failed.
> I was using the driver.upload_object_via_stream(iterator=StringIO.StringIO(somexml)) style shown in the docs.
> Looking at the LIBCLOUD_DEBUG output, the driver was using the S3 multi-part upload API and creating a new "part" for each line - so every 7 bytes or so - but the minimum size for an uploaded part is 5 MB.
> (I don't know whether the first part is allowed to be smaller than 5 MB when the entire upload is smaller than 5 MB.)
> I am working around this by forcing multi-part uploads off (see the sketch below).
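
For anyone hitting the same thing, here is a sketch of that workaround. The supports_s3_multipart_upload attribute is my assumption about what "forcing multi-part uploads off" toggles, and the bucket and object names are placeholders; check your libcloud version for the exact attribute name:

    import io

    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    cls = get_driver(Provider.S3)
    driver = cls('access key', 'secret key')

    # Assumption: this flag is what disables the multi-part code path;
    # the name and mechanism may differ between libcloud versions.
    driver.supports_s3_multipart_upload = False

    container = driver.get_container('my-bucket')
    somexml = '<build result="ok"/>'
    driver.upload_object_via_stream(
        iterator=io.BytesIO(somexml.encode('utf-8')),  # StringIO.StringIO(somexml) on Python 2
        container=container,
        object_name='build.xml')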
