Posted to notifications@libcloud.apache.org by to...@apache.org on 2011/11/01 03:25:30 UTC
svn commit: r1195771 - /libcloud/trunk/libcloud/storage/base.py
Author: tomaz
Date: Tue Nov 1 02:25:30 2011
New Revision: 1195771
URL: http://svn.apache.org/viewvc?rev=1195771&view=rev
Log:
Add some more explanation about (future) functionality of
upload_object_via_stream.
Modified:
libcloud/trunk/libcloud/storage/base.py
Modified: libcloud/trunk/libcloud/storage/base.py
URL: http://svn.apache.org/viewvc/libcloud/trunk/libcloud/storage/base.py?rev=1195771&r1=1195770&r2=1195771&view=diff
==============================================================================
--- libcloud/trunk/libcloud/storage/base.py (original)
+++ libcloud/trunk/libcloud/storage/base.py Tue Nov 1 02:25:30 2011
@@ -285,7 +285,7 @@ class StorageDriver(BaseDriver):
def upload_object(self, file_path, container, object_name, extra=None,
verify_hash=True):
"""
- Upload an object.
+ Upload an object currently located on a disk.
@type file_path: C{str}
@param file_path: Path to the object on disk.
@@ -306,6 +306,22 @@ class StorageDriver(BaseDriver):
object_name,
extra=None):
"""
+ Upload an object using an iterator.
+
+ If the provider supports it, chunked transfer encoding is used and you
+ don't need to know the total size of the data in advance.
+
+ If the provider doesn't support it, the iterator is exhausted first so
+ the total size of the data to be uploaded can be determined.
+
+ Note: Exhausting the iterator means that all the data must be buffered
+ in memory, which can result in memory exhaustion when uploading a very
+ large object.
+
+ If the file is located on disk, you are advised to use the upload_object
+ function instead, which uses os.stat to determine the file size and
+ doesn't need to buffer the whole object in memory.
+
@type iterator: C{object}
@param iterator: An object which implements the iterator interface.