Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/02/02 05:47:00 UTC

[jira] [Commented] (HADOOP-15191) Add Private/Unstable BulkDelete operations to supporting object stores for DistCP

    [ https://issues.apache.org/jira/browse/HADOOP-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349818#comment-16349818 ] 

Steve Loughran commented on HADOOP-15191:
-----------------------------------------

HADOOP-15191 Patch 004: directories

+ More logging of what's going on in CopyCommitter
+ When a dir is encountered in the target list, it's deleted directly in a recursive call

This shows that the distcp delete routine is finding and deleting directories *and then individually deleting the missing entries underneath them*. That's suboptimal on any filesystem, and aggressively suboptimal on a blobstore.

The distcp copy committer shouldn't bother explicitly deleting any file under a directory which has already been deleted. I believe the sort implicitly places directories ahead of the files beneath them, based on path length alone (am I right?), so the sorted order could be used to build a map of deleted directories. If a path considered for deletion is a child/descendant of a path already deleted, it can be skipped, as sketched below. This would save a lot of needless delete calls to any store.
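A rough illustration of that skip, assuming the entries really do arrive parent-first; the class and method names here are placeholders of mine, not anything in the attached patches:

{code:java}
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.fs.Path;

/**
 * Sketch only: remember which directories have already been deleted and
 * skip any path underneath one of them. Relies on the sorted listing
 * presenting a directory before its children.
 */
class DeletedDirectorySet {
  private final Set<Path> deletedDirs = new HashSet<>();

  /** Record a directory which has just been recursively deleted. */
  void recordDeleted(Path dir) {
    deletedDirs.add(dir);
  }

  /** True iff some ancestor of the candidate has already been deleted. */
  boolean underDeletedDirectory(Path candidate) {
    for (Path p = candidate.getParent(); p != null; p = p.getParent()) {
      if (deletedDirs.contains(p)) {
        return true;
      }
    }
    return false;
  }
}
{code}

The committer would then only issue a delete when {{underDeletedDirectory()}} is false, and call {{recordDeleted()}} whenever the entry it just deleted was itself a directory.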

Issue: how to track the deleted dirs? Assuming the sort places the shorter (parent) paths first, e.g.:

/dir
/dir/file
/dir2
/dir2/dir3

Then you'd only need to track the most recently deleted directory, and if the next file/dir checked is under it, skip the delete. But in this listing, I think a /dir23 would appear in the sorted list between /dir2 and its child /dir2/dir3, so tracking a single "most recent directory" isn't quite enough.

It might be simplest to have some LRU map of recently deleted dirs; a rough sketch follows.
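One hedged way of building that, using a size-bounded LinkedHashMap as the LRU structure; the cache size and the names are assumptions of mine, not anything from the patches. Evicting an entry is harmless: the worst case is an extra, redundant delete for a path whose deleted parent fell out of the cache.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.fs.Path;

/** Sketch only: a small LRU cache of recently deleted directories. */
class RecentlyDeletedDirs {
  private static final int MAX_ENTRIES = 1000;  // assumed bound, tune as needed

  // access-ordered LinkedHashMap which evicts its eldest entry once full
  private final Map<Path, Boolean> cache =
      new LinkedHashMap<Path, Boolean>(MAX_ENTRIES, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Path, Boolean> eldest) {
          return size() > MAX_ENTRIES;
        }
      };

  /** Record a directory which has just been recursively deleted. */
  void recordDeleted(Path dir) {
    cache.put(dir, Boolean.TRUE);
  }

  /** True if some ancestor of the candidate was recently deleted. */
  boolean underRecentlyDeletedDir(Path candidate) {
    for (Path p = candidate.getParent(); p != null; p = p.getParent()) {
      if (cache.get(p) != null) {  // get(), not containsKey(), to refresh LRU order
        return true;
      }
    }
    return false;
  }
}
{code}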


> Add Private/Unstable BulkDelete operations to supporting object stores for DistCP
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-15191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15191
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, tools/distcp
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15191-001.patch, HADOOP-15191-002.patch, HADOOP-15191-003.patch, HADOOP-15191-004.patch
>
>
> Large-scale DistCP with the -delete option doesn't finish in a viable time because the final CopyCommitter deletes every missing file one by one. The list isn't randomized (it's sorted), and the requests get throttled by AWS.
> If bulk deletion of files were exposed as an API, DistCP would issue roughly 1/1000 of the REST calls and so not get throttled.
> Proposed: add an initially Private/Unstable interface for stores, {{BulkDelete}}, which declares a page size and offers a {{bulkDelete(List<Path>)}} operation for the bulk deletion.
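The interface shape that description suggests might look something like the sketch below; the method name {{getBulkDeletePageSize()}} is a placeholder of mine, and the real signatures are whatever the attached patches define.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of the proposed shape only, not the committed API: a store
 * advertises a page size and accepts pages of paths to delete in bulk.
 */
@InterfaceAudience.Private
@InterfaceStability.Unstable
public interface BulkDelete {

  /** Largest number of paths a single bulkDelete() call may be given. */
  int getBulkDeletePageSize();

  /**
   * Delete the given paths; the list must not exceed the page size.
   * @throws IOException on a failure, including an oversized page.
   */
  void bulkDelete(List<Path> pathsToDelete) throws IOException;
}
{code}

On S3, each page could map to a single multi-object DELETE request (which accepts up to 1000 keys), which is presumably where the roughly 1/1000 figure comes from.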


