Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/09/21 22:19:20 UTC

[jira] [Comment Edited] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

    [ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15511368#comment-15511368 ] 

Steve Loughran edited comment on HADOOP-13560 at 9/21/16 10:18 PM:
-------------------------------------------------------------------

Patch 004

* fixed the name of the fs.s3a.block.output option in core-default.xml and the docs. Thanks Rajesh!
* more attempts at managing the close() operation rigorously; no evidence that this is the cause of the problem Rajesh saw, though.
* rearranged the layout of code in S3ADataBlocks so that associated classes are adjacent.
* retry on multipart commit, with sleep statements between retries (a sketch follows this list).
* gauges of active block uploads wired up.
* more debug statements.
* new Progress log for logging progress at debug level in s3a (also sketched below). Why? Because logging an event every 8 KB gets too chatty when debugging many-MB uploads.
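
A rough sketch of the retry-with-sleep idea from the multipart-commit bullet; this is not the patch's code, and the helper name, attempt count, and sleep interval are all illustrative:

{code:java}
import java.util.concurrent.Callable;

/** Hypothetical helper: retry an operation, sleeping between failed attempts. */
final class RetryWithSleep {
  static <T> T run(Callable<T> operation, int maxAttempts, long sleepMillis)
      throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return operation.call();
      } catch (Exception e) {
        last = e;                       // remember the most recent failure
        if (attempt < maxAttempts) {
          Thread.sleep(sleepMillis);    // pause before the next attempt
        }
      }
    }
    throw last;                         // every attempt failed
  }
}
{code}

The multipart commit call would be wrapped in RetryWithSleep.run(...) with a small attempt count and a sleep of a few seconds. The progress logger is the standard SLF4J pattern of giving chatty output its own logger name so it can be switched on independently of the rest of the s3a debug output; the logger name here is an assumption, not necessarily the one the patch uses:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class UploadProgress {
  // Hypothetical logger name: set it to DEBUG to get progress chatter
  // without enabling DEBUG for everything else under fs.s3a.
  private static final Logger PROGRESS =
      LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.progress");

  static void report(long bytesTransferred, long totalBytes) {
    if (PROGRESS.isDebugEnabled()) {
      PROGRESS.debug("uploaded {}/{} bytes", bytesTransferred, totalBytes);
    }
  }
}
{code}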

Tested: s3a against AWS Ireland (eu-west-1).




> S3ABlockOutputStream to support huge (many GB) file writes
> ----------------------------------------------------------
>
>                 Key: HADOOP-13560
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13560
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-13560-branch-2-001.patch, HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really works.
> 2. Verify that the metadata makes it over.
> Verifying large-file rename is important on its own, as it is needed for the very large commit operations of committers that use rename.
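
A minimal sketch of the check step 1 describes, using only the public Hadoop FileSystem API; the bucket, paths, and class name are assumptions, and this is not the test the patch adds:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical check: rename a pre-created many-GB object and verify it survived. */
public class HugeRenameCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("s3a://test-bucket/"), new Configuration());
    Path src = new Path("/huge/source.bin");   // assumed to already hold many GB
    Path dst = new Path("/huge/renamed.bin");
    long expected = fs.getFileStatus(src).getLen();
    if (!fs.rename(src, dst)) {
      throw new AssertionError("rename() returned false");
    }
    long actual = fs.getFileStatus(dst).getLen();
    if (actual != expected) {
      throw new AssertionError("length changed: " + expected + " -> " + actual);
    }
    System.out.println("rename of " + expected + " bytes verified");
  }
}
{code}

Step 2, checking that metadata makes it over, would additionally compare the object's user metadata before and after the copy, which needs a raw S3 client rather than the FileSystem API.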



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
