Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/10/09 15:02:01 UTC
[jira] [Created] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
Steve Loughran created HADOOP-15834:
---------------------------------------
Summary: Improve throttling on S3Guard DDB batch retries
Key: HADOOP-15834
URL: https://issues.apache.org/jira/browse/HADOOP-15834
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran
The batch throttling may fail too fast.

If there's a batch update of 25 writes but the default retry count is nine attempts, only nine of the batch's writes may be attempted... even if each attempt is actually successfully writing data. DynamoDB hands throttled items back as unprocessed, and every resubmission consumes one retry, so under heavy throttling the budget can run out long before the batch drains.
In contrast, a single write of a piece of data gets the same number of attempts, so 25 individual writes can withstand a lot more throttling than one bulk write of the same 25 items.
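To make the failure mode concrete, here's a minimal sketch of such a fixed-budget retry loop, assuming the AWS SDK v1 batchWriteItem call (throttled items come back via getUnprocessedItems()). Class, method and parameter names are illustrative, not the actual S3Guard code:

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class FixedBudgetBatchWrite {

  /** Count the write requests still pending across all tables. */
  private static int count(Map<String, List<WriteRequest>> items) {
    return items.values().stream().mapToInt(List::size).sum();
  }

  /**
   * Retry loop with a fixed attempt budget: every batchWriteItem call
   * consumes one attempt, whether or not it wrote anything.
   */
  public static void writeBatch(AmazonDynamoDB ddb,
      Map<String, List<WriteRequest>> batch, int maxAttempts)
      throws IOException {
    Map<String, List<WriteRequest>> pending = batch;
    int attemptsLeft = maxAttempts;
    while (!pending.isEmpty()) {
      if (attemptsLeft-- == 0) {
        // With 25 writes and a budget of 9, this can fire even though
        // every single call successfully wrote part of the batch.
        throw new IOException("Gave up with " + count(pending)
            + " writes still unprocessed");
      }
      BatchWriteItemResult result = ddb.batchWriteItem(
          new BatchWriteItemRequest().withRequestItems(pending));
      // DynamoDB returns throttled/unwritten items for resubmission.
      pending = result.getUnprocessedItems();
      // (exponential backoff between attempts omitted for brevity)
    }
  }
}
{code}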
Proposed: make the retry logic more forgiving of batch writes, e.g. don't count a batch call in which at least one data item was written as a failure; a sketch of that shape follows.
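One possible shape for the fix, again as an illustrative sketch rather than the committed implementation: charge the retry budget only for calls that make no progress at all, and reset it whenever at least one item is written.

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class ProgressAwareBatchWrite {

  /** Count the write requests still pending across all tables. */
  private static int count(Map<String, List<WriteRequest>> items) {
    return items.values().stream().mapToInt(List::size).sum();
  }

  /**
   * Retry loop that only charges the budget for calls which made no
   * progress: a call that wrote at least one item is not a failure.
   */
  public static void writeBatch(AmazonDynamoDB ddb,
      Map<String, List<WriteRequest>> batch, int maxFailures)
      throws IOException {
    Map<String, List<WriteRequest>> pending = batch;
    int failuresLeft = maxFailures;
    while (!pending.isEmpty()) {
      int before = count(pending);
      BatchWriteItemResult result = ddb.batchWriteItem(
          new BatchWriteItemRequest().withRequestItems(pending));
      pending = result.getUnprocessedItems();
      if (count(pending) < before) {
        failuresLeft = maxFailures;   // progress made: reset the budget
      } else if (--failuresLeft == 0) {
        throw new IOException("No progress after " + maxFailures
            + " consecutive batch attempts; " + count(pending)
            + " writes still unprocessed");
      }
      // (backoff between attempts omitted for brevity)
    }
  }
}
{code}

Under this scheme a throttled-but-advancing 25-item batch eventually completes, while a batch making zero progress still fails after the same bounded number of attempts as before.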