Posted to notifications@jclouds.apache.org by "Nikola Knezevic (JIRA)" <ji...@apache.org> on 2015/03/24 17:28:53 UTC

[jira] [Comment Edited] (JCLOUDS-769) Upload blob from stream

    [ https://issues.apache.org/jira/browse/JCLOUDS-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378097#comment-14378097 ] 

Nikola Knezevic edited comment on JCLOUDS-769 at 3/24/15 4:28 PM:
------------------------------------------------------------------

So, I made some rudimentary, local support for S3 (and Swift) streaming by implementing my own SequentialMPUStrategy and injecting it. However, it is far from satisfactory and looks like a hack; this would be impossible to do without the Sequential strategy. I could push this code for review, but I think that in order to include it, we first need to change what jclouds exposes to the user.


> Upload blob from stream
> -----------------------
>
>                 Key: JCLOUDS-769
>                 URL: https://issues.apache.org/jira/browse/JCLOUDS-769
>             Project: jclouds
>          Issue Type: New Feature
>          Components: jclouds-blobstore
>    Affects Versions: 1.8.1
>            Reporter: Akos Hajnal
>              Labels: multipart, s3
>
> Dear Developers,
> It was not easy, but using the S3 API it was possible to upload a large blob from a stream, without knowing its size in advance (and without storing all the data locally). I found solutions using jclouds' aws-s3-specific API (some async interface), but I really miss this feature from jclouds' general API.
> My dream is to have a method like:
> blob.getOutputStream(), into which I can write as much data as I want,
> and which pushes data to the storage as I go, until I close the stream.
> (When I used S3, I created a wrapper class extending OutputStream, which initiates multipart upload, buffers data written to the output stream, writes a part when the buffer is full, and finalizes multipart upload on stream close.) 
> I don't know if it is possible for all providers, but I really miss it...
> Thank you,
> Akos Hajnal
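The wrapper described in the issue (buffer writes, upload a part when the buffer fills, finalize on close) can be sketched as a plain OutputStream. This is a minimal illustration, not the jclouds API: the class name is hypothetical, and the Consumer callback stands in for a real per-part upload call (e.g. an S3 UploadPart request), with the close() hook standing in for CompleteMultipartUpload.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.function.Consumer;

// Buffers written bytes and emits fixed-size "parts" through a callback,
// mimicking a multipart upload driven purely by an OutputStream.
class PartBufferingOutputStream extends OutputStream {
    private final int partSize;
    private final Consumer<byte[]> partUploader; // hypothetical upload hook
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    PartBufferingOutputStream(int partSize, Consumer<byte[]> partUploader) {
        this.partSize = partSize;
        this.partUploader = partUploader;
    }

    @Override
    public void write(int b) throws IOException {
        buffer.write(b);
        if (buffer.size() >= partSize) {
            flushPart();
        }
    }

    private void flushPart() {
        partUploader.accept(buffer.toByteArray()); // "upload" one part
        buffer.reset();
    }

    @Override
    public void close() throws IOException {
        if (buffer.size() > 0) {
            flushPart(); // final, possibly short, part
        }
        // a real implementation would issue CompleteMultipartUpload here
    }
}
```

Writing 10 bytes with a part size of 4 would emit parts of 4, 4, and 2 bytes; the caller never needs to know the total size up front, which is exactly the property the issue asks for.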



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)