Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2014/10/09 19:13:34 UTC

[jira] [Commented] (HADOOP-11183) Memory-based S3AOutputstream

    [ https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165386#comment-14165386 ] 

Steve Loughran commented on HADOOP-11183:
-----------------------------------------

Given that the cached-to-HDD data is lost if the client fails, there's no added risk to data integrity here; if anything, it improves cleanup.

It will require a JVM set up with many GB of buffer space though, correct? Or does the AWS API offer an append operation for incremental writes?
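
For reference: S3 itself has no append call, but its multipart upload API already lets a client stream an object in bounded-size parts, so the heap only needs to hold a part buffer at a time (5 MB minimum per part, except the last) rather than the whole file. A minimal sketch against the AWS SDK for Java v1 follows; the class name, part size and error handling are illustrative, not the proposed patch:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative only: upload an object as a sequence of in-memory parts
 * so heap usage is bounded by the part size, not the object size.
 */
public class BoundedMemoryUpload {

  // S3's minimum part size (all parts except the last).
  private static final int PART_SIZE = 5 * 1024 * 1024;

  public static void upload(AmazonS3 s3, String bucket, String key,
                            InputStream source) throws IOException {
    String uploadId = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
    List<PartETag> etags = new ArrayList<>();
    byte[] buffer = new byte[PART_SIZE];
    int partNumber = 1;
    try {
      int filled;
      // (a zero-byte object would need a plain putObject instead)
      while ((filled = fill(source, buffer)) > 0) {
        // Each part is sent as soon as its buffer fills, so only
        // PART_SIZE bytes are ever held on the heap at once.
        UploadPartResult result = s3.uploadPart(new UploadPartRequest()
            .withBucketName(bucket).withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(partNumber++)
            .withInputStream(new ByteArrayInputStream(buffer, 0, filled))
            .withPartSize(filled));
        etags.add(result.getPartETag());
      }
      s3.completeMultipartUpload(
          new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
    } catch (RuntimeException | IOException e) {
      // On client failure nothing durable is left behind: abort discards parts.
      s3.abortMultipartUpload(
          new AbortMultipartUploadRequest(bucket, key, uploadId));
      throw e;
    }
  }

  /** Read until the buffer is full or the stream ends; returns bytes read. */
  private static int fill(InputStream in, byte[] buf) throws IOException {
    int off = 0;
    while (off < buf.length) {
      int n = in.read(buf, off, buf.length - off);
      if (n < 0) break;
      off += n;
    }
    return off;
  }
}
{code}

A real memory-based S3AOutputStream would still need close()/flush() semantics, retries, and an executor for uploading parts in parallel, but the memory bound is the point: heap usage scales with the part size, not the file size.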

> Memory-based S3AOutputstream
> ----------------------------
>
>                 Key: HADOOP-11183
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11183
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Thomas Demoor
>
> Currently s3a buffers files on disk(s) before uploading. This JIRA investigates adding a memory-based upload implementation.
> The motivation is evidently performance: this would benefit users with high network bandwidth to S3 (EC2?) or users who run Hadoop directly on an S3-compatible object store (FYI: my contributions are made on behalf of Amplidata).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)