Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2019/07/12 14:02:00 UTC

[jira] [Created] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

Steve Loughran created HADOOP-16430:
---------------------------------------

             Summary: S3AFilesystem.delete to incrementally update s3guard with deletions
                 Key: HADOOP-16430
                 URL: https://issues.apache.org/jira/browse/HADOOP-16430
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
            Reporter: Steve Loughran


Currently S3AFilesystem.delete() only updates S3Guard at the end of a paged delete operation. This makes it slow when there are many thousands of files to delete, and increases the window of vulnerability to failures.

Preferred:

* after every bulk DELETE call is issued to S3, queue the (async) S3Guard delete of all entries in that POST.
* at the end of the delete, await the completion of these operations.
* inside S3AFS, also do the delete across threads, so that different HTTPS connections can be used.
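The steps above could be sketched roughly as follows. This is a minimal illustration, not the S3AFilesystem implementation: the page size, the thread pool, and the `metadataStoreDelete` stand-in for the S3Guard `MetadataStore` update are all hypothetical; the S3 bulk DELETE itself is elided.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IncrementalDeleteSketch {

    // Illustrative page size only; S3 bulk DELETE actually accepts up to 1000 keys.
    static final int PAGE_SIZE = 3;

    // Hypothetical stand-in for the S3Guard metadata store update of one entry.
    static void metadataStoreDelete(String key, List<String> log) {
        log.add(key);
    }

    public static List<String> deleteAll(List<String> keys) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<CompletableFuture<Void>> pending = new ArrayList<>();
        List<String> deleted = Collections.synchronizedList(new ArrayList<>());
        for (int start = 0; start < keys.size(); start += PAGE_SIZE) {
            List<String> page =
                keys.subList(start, Math.min(start + PAGE_SIZE, keys.size()));
            // 1. issue the bulk DELETE for this page to S3 (elided here);
            // 2. immediately queue the async metadata-store deletes for the page,
            //    rather than waiting for the whole paged operation to finish.
            for (String key : page) {
                pending.add(CompletableFuture.runAsync(
                    () -> metadataStoreDelete(key, deleted), pool));
            }
        }
        // 3. at the end of the delete, await completion of all queued updates.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return deleted;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 7; i++) {
            keys.add("s3a://bucket/dir/file-" + i);
        }
        System.out.println(deleteAll(keys).size());
    }
}
```

Because the metadata-store updates run on their own pool, they use separate connections from the S3 DELETE calls, which is what lets the two proceed concurrently.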

This should maximise DDB throughput against tables which aren't IO limited.

When executed against small, IOPS-limited tables, the parallel DDB DELETE batches will trigger a lot of throttling events; we should make sure these aren't going to trigger failures.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org