Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2015/08/21 19:58:46 UTC

[jira] [Commented] (HADOOP-12346) Increase some default timeouts / retries for S3a connector

    [ https://issues.apache.org/jira/browse/HADOOP-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707155#comment-14707155 ] 

Steve Loughran commented on HADOOP-12346:
-----------------------------------------

-1 to any changes to s3n; it's used enough that changing things would only cause surprises. Deployments installed via management tooling can pick up their defaults from that tooling, and the people who maintain the tools can therefore control what those defaults are. More succinctly: "we don't like changing defaults, even when they aren't always the best".

As s3a is newer, it's probably more amenable to change, under the "getting it working completely" category.

> Increase some default timeouts / retries for S3a connector
> ----------------------------------------------------------
>
>                 Key: HADOOP-12346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12346
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>            Reporter: Sean Mackrory
>         Attachments: 0001-HADOOP-12346.-Increase-some-default-timeouts-retries.patch
>
>
> I've been seeing some flakiness in jobs running against S3a, both first hand and with other accounts, for which increasing fs.s3a.connection.timeout and fs.s3a.attempts.maximum have been a reliable solution. I propose we increase the defaults.
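
For context, the two settings named above are ordinary Hadoop configuration keys and can already be raised per deployment (in core-site.xml or programmatically) without changing the shipped defaults. A minimal sketch, using illustrative values rather than the exact numbers proposed in the attached patch, and a hypothetical bucket name:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class S3aTimeoutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connection/socket timeout in milliseconds; 200000 is an
            // illustrative value, not necessarily the patch's proposal.
            conf.set("fs.s3a.connection.timeout", "200000");
            // Maximum number of retry attempts by the S3 client;
            // 20 is likewise only an example.
            conf.setInt("fs.s3a.attempts.maximum", 20);

            // The overridden values apply to any FileSystem instance
            // created from this Configuration ("my-bucket" is hypothetical).
            FileSystem fs = FileSystem.get(new URI("s3a://my-bucket/"), conf);
            System.out.println(fs.getUri());
        }
    }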



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)