Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2019/08/07 16:33:00 UTC

[jira] [Updated] (HADOOP-16499) S3A retry policy to be exponential

     [ https://issues.apache.org/jira/browse/HADOOP-16499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16499:
------------------------------------
    Release Note: The S3A filesystem now backs off exponentially on failures. If you have customized the fs.s3a.retry.limit and fs.s3a.retry.interval options, you may wish to review these settings.
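For reference, the two options named in the release note are set in core-site.xml. The values below are illustrative, not recommendations; check your deployment's defaults before changing them:

```xml
<!-- Maximum number of retry attempts for recoverable S3A failures. -->
<property>
  <name>fs.s3a.retry.limit</name>
  <value>7</value>
</property>
<!-- Base interval between retries; with this change the actual gap
     grows exponentially from this starting value rather than staying fixed. -->
<property>
  <name>fs.s3a.retry.interval</name>
  <value>500ms</value>
</property>
```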

> S3A retry policy to be exponential
> ----------------------------------
>
>                 Key: HADOOP-16499
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16499
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0, 3.1.2
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Critical
>
> The fixed S3A retry policy doesn't leave big enough gaps for cached 404s to expire, so we can't recover from this.
> HADOOP-16490 is a full fix for this, but the change we can backport is moving from fixed to exponential retries.
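As a sketch of why this matters: with a fixed interval, the total wait across all attempts stays small, so every retry can land inside the window where S3 still serves the cached 404. Doubling the gap each attempt pushes later retries past that window. The code below is a hypothetical illustration, not the actual S3ARetryPolicy implementation; the base interval and attempt limit mirror the fs.s3a.retry.interval / fs.s3a.retry.limit style settings and are assumptions.

```java
// Hypothetical sketch contrasting fixed and exponential retry gaps.
// Not Hadoop code: names, defaults, and the cap are illustrative assumptions.
public class ExponentialBackoffSketch {

    /** Delay before a given 0-based retry attempt: base doubled each attempt, capped. */
    static long exponentialDelayMillis(long baseMillis, int attempt, long capMillis) {
        long delay = baseMillis << Math.min(attempt, 30); // shift, guarded against overflow
        return Math.min(delay, capMillis);
    }

    /** Old behaviour: the same gap on every attempt. */
    static long fixedDelayMillis(long baseMillis, int attempt) {
        return baseMillis;
    }

    public static void main(String[] args) {
        long base = 500;   // ms, assumed retry interval
        int limit = 7;     // assumed retry limit
        long fixedTotal = 0, expTotal = 0;
        for (int i = 0; i < limit; i++) {
            fixedTotal += fixedDelayMillis(base, i);
            expTotal += exponentialDelayMillis(base, i, 60_000);
        }
        // Fixed retries wait 3.5s in total; exponential retries wait over a minute,
        // giving a cached 404 far more time to expire before the last attempt.
        System.out.println("fixed total wait:       " + fixedTotal + " ms");
        System.out.println("exponential total wait: " + expTotal + " ms");
    }
}
```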



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org