Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2019/05/08 13:43:00 UTC

[jira] [Commented] (HADOOP-15604) Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard

    [ https://issues.apache.org/jira/browse/HADOOP-15604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835612#comment-16835612 ] 

Steve Loughran commented on HADOOP-15604:
-----------------------------------------

S3Guard.addAncestors() tries to walk up the tree efficiently and only call put() on entries which don't exist, thereby avoiding that excessive load.
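
The pattern, as a self-contained sketch (plain strings and an in-memory set stand in for the real {{Path}} and {{MetadataStore}} types, so everything here is illustrative rather than the actual implementation):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch of the walk-up idea: climb from the new entry towards the
 * root and stop at the first ancestor that already exists, so only
 * the genuinely missing entries get put().
 */
public class AncestorWalkSketch {

  private final Set<String> store = new HashSet<>();

  /** Record a path, creating only the missing ancestor entries. */
  void addWithAncestors(String path) {
    Deque<String> missing = new ArrayDeque<>();
    String current = path;
    while (current != null && !store.contains(current)) {
      missing.push(current);        // remember it; deepest ends up last
      current = parentOf(current);  // walk one level up
    }
    // The first existing ancestor terminates the walk: everything above
    // it is assumed present, so no further lookups or writes happen.
    store.addAll(missing);
  }

  private static String parentOf(String path) {
    int slash = path.lastIndexOf('/');
    return slash <= 0 ? null : path.substring(0, slash);
  }

  public static void main(String[] args) {
    AncestorWalkSketch s = new AncestorWalkSketch();
    s.addWithAncestors("/a/b/c/file1");  // writes all four entries
    s.addWithAncestors("/a/b/d/file2");  // writes only /a/b/d and the file
    System.out.println(s.store);
  }
}
{code}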

But: {{metadataStore.put(newDirs)}} goes on to create all the ancestors in {{innerPut(Collection<DDBPathMetadata> metas)}}. That is, it doesn't bother looking for the parent entries; it just blindly tries to create them all. For HADOOP-15183 I'm minimising this across move operations by passing a context around for the {{move()}} calls. I think the same idea somehow needs to be preserved here, but it's a lot harder to join up, given that it's {{S3AFileSystem.finishedWrite()}} where this work is done and the context there is pretty minimal.
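
To make the contrast concrete, a hedged sketch of both behaviours side by side; the {{OpContext}} class below is a hypothetical stand-in for the kind of per-operation state passed around the {{move()}} calls, not anything in the actual codebase:

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Contrast of the two behaviours. OpContext is a hypothetical
 * stand-in for per-operation state, not the real Hadoop API.
 */
public class AncestorContextSketch {

  /** What innerPut() effectively does: write every ancestor, every time. */
  static List<String> blindAncestorPuts(String path) {
    List<String> writes = new ArrayList<>();
    for (String p = path; p != null; p = parentOf(p)) {
      writes.add(p);  // unconditional write, even if the entry exists
    }
    return writes;
  }

  /** Per-operation context remembering which ancestors were already written. */
  static class OpContext {
    private final Set<String> written = new HashSet<>();

    List<String> dedupedAncestorPuts(String path) {
      List<String> writes = new ArrayList<>();
      // Set.add() returns false once a path has been seen, ending the climb.
      for (String p = path; p != null && written.add(p); p = parentOf(p)) {
        writes.add(p);  // only paths not yet written in this operation
      }
      return writes;
    }
  }

  private static String parentOf(String path) {
    int slash = path.lastIndexOf('/');
    return slash <= 0 ? null : path.substring(0, slash);
  }

  public static void main(String[] args) {
    OpContext ctx = new OpContext();
    System.out.println(ctx.dedupedAncestorPuts("/a/b/c/f1"));  // whole chain
    System.out.println(ctx.dedupedAncestorPuts("/a/b/d/f2"));  // just the new suffix
    System.out.println(blindAncestorPuts("/a/b/d/f2"));        // whole chain again
  }
}
{code}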

> Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-15604
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15604
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Gabor Bota
>            Assignee: Steve Loughran
>            Priority: Major
>
> When there are ~50 files being committed, each in its own thread from the commit pool, the DDB repo is probably being overloaded just from one single process doing task commit. We should be backing off more, especially given that failing on a write could potentially leave the store inconsistent with the FS (renames, etc.).
> It would be nice to have some tests to prove that the I/O thresholds are the reason for unprocessed items in the DynamoDB metadata store.
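
On the back-off point in the description: the usual shape is to resubmit only what the store reports as unprocessed, with jittered exponential delays in between. A generic, self-contained sketch; the {{Function}} stands in for a DynamoDB batch write that returns its unprocessed subset, and the limits are illustrative rather than anything the DDB metadata store actually configures:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Function;
import java.util.stream.Collectors;

/**
 * Generic retry-with-backoff sketch. Resubmits only the subset a
 * batch write reports as unprocessed, sleeping with jittered
 * exponential delays in between; all limits here are illustrative.
 */
public final class BatchRetrySketch {

  static <T> void writeWithBackoff(
      Collection<T> batch,
      Function<Collection<T>, Collection<T>> writeReturningUnprocessed)
      throws InterruptedException {
    long delayMs = 100;              // initial pause; illustrative
    final long maxDelayMs = 10_000;  // cap, so waits stay bounded
    final int maxAttempts = 10;

    Collection<T> pending = batch;
    for (int attempt = 1; !pending.isEmpty(); attempt++) {
      pending = writeReturningUnprocessed.apply(pending);
      if (pending.isEmpty()) {
        return;                      // everything accepted
      }
      if (attempt >= maxAttempts) {
        throw new IllegalStateException(pending.size()
            + " items still unprocessed after " + maxAttempts + " attempts");
      }
      // Jittered exponential backoff spreads one process's retries out
      // instead of hammering the table in a tight loop.
      Thread.sleep(ThreadLocalRandom.current().nextLong(delayMs));
      delayMs = Math.min(delayMs * 2, maxDelayMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Toy "store" that accepts roughly half of each batch, to drive retries.
    writeWithBackoff(new ArrayList<>(Arrays.asList("a", "b", "c", "d")),
        items -> items.stream()
            .skip(Math.max(1, items.size() / 2))
            .collect(Collectors.toList()));
    System.out.println("all items accepted");
  }
}
{code}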


