Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/07/26 19:29:00 UTC

[jira] [Comment Edited] (HADOOP-15426) S3guard DDB throttle events on reads not being retried

    [ https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482686#comment-16482686 ] 

Steve Loughran edited comment on HADOOP-15426 at 7/26/18 7:28 PM:
------------------------------------------------------------------

Looking at this a bit more. The AWS docs say "We automatically handle this". My stack traces say "no they don't".

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html

Update: See comment [https://issues.apache.org/jira/browse/HADOOP-15426?focusedCommentId=16558794]. They do throttle; it's just that you can still overload the provisioned capacity. We do still want to retry stuff ourselves, but it's less critical for backporting.
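
For reference, a minimal sketch (not the actual S3Guard code; class and constant names are illustrative) of what retrying ourselves around the v1 SDK's batchWriteItem could look like: back off and resubmit on the throttle exception, and also resubmit whatever comes back in UnprocessedItems.

{code}
// A sketch, not the S3Guard implementation: retry BatchWriteItem on
// throttling and resubmit UnprocessedItems. MAX_RETRIES and BASE_SLEEP_MS
// are illustrative values.
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class ThrottledBatchWriter {
  private static final int MAX_RETRIES = 9;
  private static final long BASE_SLEEP_MS = 100;

  private final AmazonDynamoDB ddb;

  public ThrottledBatchWriter(AmazonDynamoDB ddb) {
    this.ddb = ddb;
  }

  /** Write a batch, backing off on throttling and resubmitting leftovers. */
  public void writeWithRetry(Map<String, List<WriteRequest>> items)
      throws InterruptedException {
    Map<String, List<WriteRequest>> pending = items;
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      try {
        BatchWriteItemResult result = ddb.batchWriteItem(
            new BatchWriteItemRequest().withRequestItems(pending));
        pending = result.getUnprocessedItems();
        if (pending.isEmpty()) {
          return;                         // everything accepted
        }
        // partial acceptance: only the unprocessed items go round again
      } catch (ProvisionedThroughputExceededException e) {
        // throttled outright: keep the whole pending batch and back off
      }
      Thread.sleep(BASE_SLEEP_MS << Math.min(attempt, 6));
    }
    throw new IllegalStateException(
        "Batch write still throttled after " + MAX_RETRIES + " retries");
  }
}
{code}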


was (Author: stevel@apache.org):
Looking at this a bit more. The AWS docs say "We automatically handle this". My stack traces say "no they don't".

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html

> S3guard DDB throttle events on reads not being retried
> ------------------------------------------------------
>
>                 Key: HADOOP-15426
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15426
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>         Attachments: HADOOP-15426-001.patch, Screen Shot 2018-07-24 at 15.16.46.png, Screen Shot 2018-07-25 at 16.22.10.png, Screen Shot 2018-07-25 at 16.28.53.png
>
>
> Managed to create this on a parallel test run:
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 	at 
> {code}
> We should be able to handle this. Note it's a 400 "bad things happened" error though, not the 503 that S3 raises for throttling.
> h3. We need a retry handler for DDB throttle operations
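
One possible shape for that handler, sketched against Hadoop's existing retry machinery (assumed wiring and illustrative limits, not the attached patch): a RetryPolicy which treats DynamoDB's 400 ProvisionedThroughputExceededException as a throttle event and delegates the backoff schedule to RetryPolicies.exponentialBackoffRetry.

{code}
// Assumed wiring, not the attached patch: treat the 400
// ProvisionedThroughputExceededException as retryable throttling and let
// RetryPolicies.exponentialBackoffRetry pick the sleep times.
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class DdbThrottleRetryPolicy implements RetryPolicy {

  // illustrative limits: up to 9 attempts starting at 100ms
  private final RetryPolicy backoff =
      RetryPolicies.exponentialBackoffRetry(9, 100, TimeUnit.MILLISECONDS);

  @Override
  public RetryAction shouldRetry(Exception e, int retries, int failovers,
      boolean idempotent) throws Exception {
    if (e instanceof ProvisionedThroughputExceededException) {
      // DDB throttle event: back off and go again
      return backoff.shouldRetry(e, retries, failovers, idempotent);
    }
    return RetryAction.FAIL;              // everything else is a real failure
  }
}
{code}

A real policy would also have to recognise the throttle when it arrives already wrapped as AWSServiceThrottledException, as in the stack trace above.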


