Posted to common-dev@hadoop.apache.org by "Tak-Lon (Stephen) Wu (Jira)" <ji...@apache.org> on 2022/06/21 18:41:00 UTC

[jira] [Created] (HADOOP-18310) Add option and make 400 bad request retryable

Tak-Lon (Stephen) Wu created HADOOP-18310:
---------------------------------------------

             Summary: Add option and make 400 bad request retryable
                 Key: HADOOP-18310
                 URL: https://issues.apache.org/jira/browse/HADOOP-18310
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
    Affects Versions: 3.3.4
            Reporter: Tak-Lon (Stephen) Wu


When using a customized credential provider via fs.s3a.aws.credentials.provider, e.g. org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, an expired credential supplied by the pluggable provider causes the request to fail with error code 400, surfaced as a bad request exception.
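For reference, such a pluggable provider is selected with a configuration entry along these lines (a standard S3A setting; the value shown is the provider class named above):

```xml
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider</value>
</property>
```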

The current S3ARetryPolicy treats this error as unrecoverable: it fails immediately and does not retry at the S3A level.

A recent use case in HBase showed that this exception can cause a Region Server to be abandoned immediately, without retry, when the file system is opening a file or an S3AInputStream is reopening one. In the S3AInputStream case especially, we cannot find a good way to retry outside of the file system semantics (once an ongoing stream fails it is considered to be in an irreparable state), so we propose an optional flag that enables retrying inside S3A.
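The intended behavior can be sketched as a retry decision that, by default, keeps 400 non-retryable but retries it when the new flag is set. This is an illustrative, self-contained sketch, not Hadoop's actual S3ARetryPolicy API, and the flag name is hypothetical:

```java
// Sketch of the proposed opt-in behavior: treat HTTP 400 (bad request,
// e.g. from an expired pluggable credential) as retryable when enabled.
public class RetrySketch {
    // Hypothetical flag mirroring the proposed S3A option; name is illustrative.
    static final boolean RETRY_ON_400 = true;

    /** Decide whether a failed request should be retried. */
    static boolean shouldRetry(int httpStatus, int attempt, int maxAttempts) {
        if (attempt >= maxAttempts) {
            return false;              // retry budget exhausted
        }
        if (httpStatus == 400) {
            return RETRY_ON_400;       // fails fast today; retried only when opted in
        }
        return httpStatus >= 500;      // server-side errors remain retryable
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(400, 1, 3)); // retried with the flag set
        System.out.println(shouldRetry(403, 1, 3)); // auth failures still fail fast
    }
}
```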



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org