Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/08/29 17:23:00 UTC

[jira] [Commented] (HADOOP-14483) increase default value of fs.s3a.multipart.size to 128M

    [ https://issues.apache.org/jira/browse/HADOOP-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596621#comment-16596621 ] 

Steve Loughran commented on HADOOP-14483:
-----------------------------------------

I'm going to close this as WONTFIX. There are some subtle behaviour changes if you do expand the size; in particular, the time to close() a file can increase, as you can have up to (128 x 2^20 - 1) bytes still waiting for upload. Safest to leave it as is and let people tune it themselves.
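
For anyone who does want to tune it, here is a minimal sketch of overriding the part size per-job through the Hadoop Configuration API; the bucket URI and the 128M value are illustrative, not recommendations, and the usual place for a cluster-wide setting is core-site.xml:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class S3AMultipartTune {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Size of each part in a multipart upload; suffixes like "M" are accepted.
            conf.set("fs.s3a.multipart.size", "128M");
            // File size above which uploads switch to multipart.
            conf.set("fs.s3a.multipart.threshold", "128M");
            // "example-bucket" is a placeholder, not a real endpoint.
            FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
            System.out.println("multipart size: "
                + fs.getConf().get("fs.s3a.multipart.size"));
        }
    }

Remember the trade-off above: a larger part size means more data buffered before upload, so close() can block longer.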

> increase default value of fs.s3a.multipart.size to 128M
> -------------------------------------------------------
>
>                 Key: HADOOP-14483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14483
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> Increase the default value of {{fs.s3a.multipart.size}} from "100M" to "128M".
> Why? AWS S3 throttles clients that make too many requests; a larger part size means fewer requests per upload, so less throttling. Also: document the issue.
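
As a rough illustration of the request reduction (example figures, not from the issue): a 10 GB upload needs ceil(10240 MB / 100 MB) = 103 part PUT requests at the old 100M part size, but only 10240 / 128 = 80 at 128M, about 22% fewer requests for S3 to throttle.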



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org