Posted to dev@libcloud.apache.org by "Tomaz Muraus (Resolved) (JIRA)" <ji...@apache.org> on 2011/11/02 00:45:32 UTC

[dev] [jira] [Resolved] (LIBCLOUD-101) Add support for uploading storage objects using an iterator even though driver might not support chunked encoding

     [ https://issues.apache.org/jira/browse/LIBCLOUD-101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomaz Muraus resolved LIBCLOUD-101.
-----------------------------------

       Resolution: Fixed
    Fix Version/s:     (was: 0.5.2)

OK, I have finally implemented this feature and now "upload_object_via_stream" also works with the Amazon S3 driver.

If a provider doesn't support chunked transfer encoding and the upload_object_via_stream method is used, we first exhaust the iterator so we can determine the content length, and after that the whole payload is sent in a single HTTP request.
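To make the fallback concrete, here is a minimal sketch of the behavior described above. This is not the actual Libcloud internals; the function names (exhaust_iterator, upload_via_stream) and the send callback are made up for illustration.

```python
def exhaust_iterator(iterator):
    """Read the whole iterator into memory and return the joined bytes.

    Note: this buffers the entire object in memory, so very large
    uploads can exhaust the available RAM.
    """
    return b''.join(iterator)


def upload_via_stream(iterator, supports_chunked, send):
    """Upload data from an iterator, falling back to buffering when the
    provider does not support chunked transfer encoding."""
    if supports_chunked:
        # Stream each chunk directly; no Content-Length header is needed.
        for chunk in iterator:
            send(chunk)
        return None
    # Fallback (e.g. Amazon S3): buffer everything so the Content-Length
    # can be computed, then send the payload in a single request body.
    data = exhaust_iterator(iterator)
    send(data, content_length=len(data))
    return len(data)
```

The trade-off is exactly the one noted above: the streaming path keeps memory usage constant, while the fallback path holds the whole object in memory.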

I will add warnings to the docs so it's clear that in this case the whole file is read and buffered into memory.

Currently this functionality is available in trunk, but it should also be included in the next release (0.6.0), which should be out soon.

There is a chance that the internals will still change a bit (need to do some refactoring), but the public interface will stay the same.

Feedback / testing is welcome.
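For anyone who wants to try it, a caller-side sketch follows. The chunked-file generator (read_in_chunks) and the file path are made up for illustration; the commented-out call mirrors the public upload_object_via_stream interface.

```python
def read_in_chunks(path, chunk_size=8192):
    """Yield a file's contents as fixed-size byte chunks, suitable for
    passing to upload_object_via_stream."""
    with open(path, 'rb') as fp:
        while True:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            yield chunk


# With a storage driver and container already obtained, the upload
# would look roughly like this:
#
# driver.upload_object_via_stream(
#     iterator=read_in_chunks('/tmp/example.bin'),
#     container=container,
#     object_name='example.bin')
```

On drivers without chunked encoding support (such as S3), the iterator will be exhausted into memory before the request is sent, as described above.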
                
> Add support for uploading storage objects using an iterator even though driver might not support chunked encoding
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: LIBCLOUD-101
>                 URL: https://issues.apache.org/jira/browse/LIBCLOUD-101
>             Project: Libcloud
>          Issue Type: Improvement
>          Components: Storage
>    Affects Versions: 0.5.0
>            Reporter: Birk Nilson
>            Assignee: Tomaz Muraus
>            Priority: Minor
>              Labels: storage
>             Fix For: 0.6.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently there are two options for uploading objects to a given storage: StorageDriver.upload_object() and StorageDriver.upload_object_via_stream(). The former strictly requires the object to be written to disk prior to upload, since it takes a file path as an argument. The latter does support uploading from an iterator, but does not work with Amazon S3, since S3 does not support chunked encoding. After discussing this with Tomaž, he and I came to the conclusion that a third method is in order, which will take an iterator as an argument while not requiring the provider to support chunked encoding.
> The solution is to support an iterator as an argument and generate a request by iterating through it directly. However, this will require the object to be kept in memory, and depending on its size and the available resources it might lead to memory exhaustion. So it should be noted in the method's documentation that the implementation is responsible for preventing such outcomes; which can be done, for example, by limiting the allowed upload size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira