Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/11/03 11:31:01 UTC
[jira] [Commented] (HADOOP-13276) S3a operations keep retrying if the password is wrong
[ https://issues.apache.org/jira/browse/HADOOP-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674023#comment-16674023 ]
Steve Loughran commented on HADOOP-13276:
-----------------------------------------
Thinking some more here: retrying on a no-auth failure can make sense if the credential providers know there has been a failure and can try to re-retrieve their secrets. In particular, the IAM authenticator can do this.
Passing a refresh() command all the way down should kick this off, but how to trigger the retry?
# Invoker could take a callback interface to invoke on auth failures before the retry; S3AFS would trigger the refresh of its {{AWSCredentialProviderList}}. Similar to {{org.apache.hadoop.fs.s3a.Invoker.Retried}}, but stored during construction.
# Need to identify all auth-failure exceptions as special and translate them into a specific exception; 403 responses, probably.
# Exception translation would need to invoke that callback iff there was an auth failure.
Taking a Retried callback in the constructor would be trivial; we may want to add a new interface, though, which lets the callback actually indicate whether or not to retry. Why? Because the callback could recognise that auth isn't working when there has just been a previous attempt. Realistically, only one failure per callback is going to be recoverable: if nothing works after a credential refresh, you may as well give up.
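The callback idea above could be sketched roughly like this. All names here (AuthFailureCallback, CredentialRefresher, refreshCredentials) are hypothetical illustrations of the proposal, not actual Hadoop classes:

```java
// Illustrative sketch of an auth-failure callback which refreshes
// credentials once, then gives up if auth keeps failing.
public class CredentialRefreshExample {

  /** Callback invoked before a retry; its return value says whether to retry. */
  public interface AuthFailureCallback {
    boolean onAuthFailure(int previousAttempts);
  }

  /** Hypothetical stand-in for an S3AFS-owned credential refresher. */
  public static class CredentialRefresher implements AuthFailureCallback {
    private boolean refreshedOnce = false;

    @Override
    public boolean onAuthFailure(int previousAttempts) {
      if (refreshedOnce) {
        // A refresh was already attempted and auth still failed: give up.
        return false;
      }
      refreshedOnce = true;
      refreshCredentials();   // e.g. propagate refresh() down the provider chain
      return true;            // retry exactly once after the refresh
    }

    private void refreshCredentials() {
      // would trigger the refresh of the AWSCredentialProviderList here
    }
  }
}
```

The point of returning a boolean is exactly the "only one failure is recoverable" argument: the callback itself, not the retry policy, knows whether a refresh has already been tried.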
> S3a operations keep retrying if the password is wrong
> -----------------------------------------------------
>
> Key: HADOOP-13276
> URL: https://issues.apache.org/jira/browse/HADOOP-13276
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.4
> Reporter: Steve Loughran
> Assignee: Thomas Poepping
> Priority: Minor
>
> If you do a {{hadoop fs}} command with a valid AWS account but the wrong password, it takes a while to time out because of retries happening underneath.
> Eventually it gives up, but failing fast would be better.
> # maybe: check the password length and fail if it is not the right length (is there a standard length, or at least a range?)
> # consider a retry policy which fails faster on signature failures/403 responses
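The fail-fast idea in the quoted description could look something like the following. This is an illustrative sketch, not the actual S3A retry policy; ServiceException and shouldRetry are hypothetical stand-ins:

```java
// Illustrative sketch: treat 403/auth failures as non-retryable while
// other errors remain retryable up to a limit.
public class FailFastOn403 {

  /** Minimal stand-in for a service exception carrying an HTTP status code. */
  public static class ServiceException extends RuntimeException {
    private final int statusCode;

    public ServiceException(int statusCode) {
      this.statusCode = statusCode;
    }

    public int getStatusCode() {
      return statusCode;
    }
  }

  /** Never retry on 403 (signature/permission failure); otherwise retry up to maxAttempts. */
  public static boolean shouldRetry(Exception e, int attempts, int maxAttempts) {
    if (e instanceof ServiceException
        && ((ServiceException) e).getStatusCode() == 403) {
      return false;                 // auth failure: fail fast, no point retrying
    }
    return attempts < maxAttempts;  // transient errors: keep retrying
  }
}
```

With a policy like this, a wrong secret key surfaces immediately instead of after the full retry budget.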