Posted to common-issues@hadoop.apache.org by "Adam Antal (JIRA)" <ji...@apache.org> on 2018/11/06 13:36:00 UTC
[jira] [Commented] (HADOOP-15573) s3guard set-capacity to not retry on an access denied exception
[ https://issues.apache.org/jira/browse/HADOOP-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676774#comment-16676774 ]
Adam Antal commented on HADOOP-15573:
-------------------------------------
It looks to me like this issue was fixed by HADOOP-15583, since this piece of code was added to {{S3AUtils.translateDynamoDBException}}:
{code:java}
final int statusCode = ddbException.getStatusCode();
final String errorCode = ddbException.getErrorCode();
IOException result = null;
// 400 gets used a lot by DDB
if (statusCode == 400) {
  switch (errorCode) {
  case "AccessDeniedException":
    result = (IOException) new AccessDeniedException(
        path,
        null,
        ddbException.toString())
        .initCause(ddbException);
    break;
  default:
    result = new AWSBadRequestException(message, ddbException);
  }
}
{code}
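To illustrate why that translation stops the retries, here is a minimal, self-contained sketch (not the actual Hadoop code; {{FakeDdbException}}, {{translate}} and {{setCapacity}} are hypothetical stand-ins for the AWS SDK exception and the S3Guard code paths): once a 400/AccessDeniedException is translated into {{java.nio.file.AccessDeniedException}}, a retry loop can recognize it as unrecoverable and rethrow immediately instead of retrying.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class FailFastSketch {

    // Hypothetical stand-in for the AWS SDK's AmazonServiceException.
    static class FakeDdbException extends RuntimeException {
        final int statusCode;
        final String errorCode;
        FakeDdbException(int statusCode, String errorCode, String msg) {
            super(msg);
            this.statusCode = statusCode;
            this.errorCode = errorCode;
        }
    }

    // Simplified version of the translation shown above: map a
    // 400/AccessDeniedException to java.nio's AccessDeniedException.
    static IOException translate(String path, FakeDdbException e) {
        if (e.statusCode == 400 && "AccessDeniedException".equals(e.errorCode)) {
            return (IOException) new AccessDeniedException(path, null, e.toString())
                .initCause(e);
        }
        return new IOException(e.getMessage(), e);
    }

    static int attempts = 0;

    // A retry loop that treats AccessDeniedException as unrecoverable
    // and rethrows it on the first attempt instead of looping.
    static void setCapacity() throws IOException {
        final int maxRetries = 5;
        for (int i = 0; i <= maxRetries; i++) {
            attempts++;
            try {
                // Simulate AWS rejecting the restricted caller every time.
                throw new FakeDdbException(400, "AccessDeniedException", "denied");
            } catch (FakeDdbException e) {
                IOException translated = translate("s3a://bucket/table", e);
                if (translated instanceof AccessDeniedException) {
                    throw translated; // fail fast: retrying cannot succeed
                }
                // other 400s would fall through here and be retried
            }
        }
    }

    public static void main(String[] args) {
        try {
            setCapacity();
        } catch (IOException e) {
            System.out.println("attempts=" + attempts
                + " exception=" + e.getClass().getSimpleName());
        }
    }
}
```

Running the sketch shows a single attempt followed by an immediate {{AccessDeniedException}}, which is the fail-fast behaviour the issue asks for.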
> s3guard set-capacity to not retry on an access denied exception
> ---------------------------------------------------------------
>
> Key: HADOOP-15573
> URL: https://issues.apache.org/jira/browse/HADOOP-15573
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Priority: Minor
>
> When you call {{hadoop s3guard set-capacity}} with restricted access, you are (correctly) blocked by AWS, but the client keeps retrying. It should fail fast on a 400/AccessDeniedException.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)