Posted to common-issues@hadoop.apache.org by "Aaron Fabbri (JIRA)" <ji...@apache.org> on 2018/07/19 01:09:00 UTC

[jira] [Commented] (HADOOP-14757) S3AFileSystem.innerRename() to size metadatastore lists better

    [ https://issues.apache.org/jira/browse/HADOOP-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548635#comment-16548635 ] 

Aaron Fabbri commented on HADOOP-14757:
---------------------------------------

[~abrahamfine] I'll unassign these S3A JIRAs from you, unless you were planning on working on them (shout if so). Thanks!

> S3AFileSystem.innerRename() to size metadatastore lists better
> --------------------------------------------------------------
>
>                 Key: HADOOP-14757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14757
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0-beta1
>            Reporter: Steve Loughran
>            Assignee: Abraham Fine
>            Priority: Minor
>
> In {{S3AFileSystem.innerRename()}}, various ArrayLists are created to track paths to update; these are created with the default size. It could/should be possible to allocate better, avoiding expensive array growth & copy operations while iterating through the list of entries.
> # for a single file copy, sizes == 1
> # for a recursive copy, the outcome of the first real LIST will either provide the actual size, or, if the list == the max response, a very large minimum size.
> For #2, we'd need a hint of the iterable's length rather than just iterating through it... some interface like {{IterableLength.expectedMinimumSize()}} could do that.
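The pre-sizing described above could be sketched roughly as follows. This is a minimal illustration, not code from the Hadoop tree: the {{IterableLength}} interface and the {{allocateFor}} helper are hypothetical names standing in for whatever shape the real patch would take.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical capacity-hint interface for iterables whose minimum size
// is known up front (e.g. from the first S3 LIST response).
interface IterableLength {
    int expectedMinimumSize();
}

public class RenameListSizing {

    // Pre-size the path-tracking list instead of relying on ArrayList's
    // default capacity, avoiding repeated grow-and-copy passes.
    static List<String> allocateFor(IterableLength source, boolean singleFile) {
        if (singleFile) {
            // Case #1: a single-file copy needs exactly one slot.
            return new ArrayList<>(1);
        }
        if (source != null) {
            // Case #2: use the LIST-derived hint as the initial capacity.
            return new ArrayList<>(Math.max(source.expectedMinimumSize(), 1));
        }
        // No hint available: fall back to default sizing.
        return new ArrayList<>();
    }

    public static void main(String[] args) {
        IterableLength firstListPage = () -> 1000; // e.g. a full LIST page
        List<String> paths = allocateFor(firstListPage, false);
        // Capacity is pre-allocated, but the list starts empty.
        System.out.println(paths.size()); // prints 0
    }
}
```

Note that {{ArrayList}} exposes capacity only through its constructor, so the benefit is avoiding the internal array reallocations, not changing the observable list size.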



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org