Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2019/01/14 16:28:00 UTC

[jira] [Resolved] (HADOOP-16047) Avoid expensive rename when DistCp is writing to S3

     [ https://issues.apache.org/jira/browse/HADOOP-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16047.
-------------------------------------
    Resolution: Duplicate

Closing as duplicate; please reattach your proposal there.

As noted in that one, it's not just performance: if the rename takes so long that the workers don't get their heartbeat in on time, it's a disaster.

> Avoid expensive rename when DistCp is writing to S3
> ---------------------------------------------------
>
>                 Key: HADOOP-16047
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16047
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3, tools/distcp
>            Reporter: Andrew Olson
>            Priority: Major
>
> When writing to an S3-based target, the temp-file-and-rename logic in RetriableFileCopyCommand adds unnecessary cost to the job, because a rename against S3 is implemented as a server-side copy followed by a delete [1]. The renames are parallelized across all of the DistCp map tasks, which mitigates the severity to some extent. However, a configuration property that conditionally allows distributed copies to skip that expense and write directly to the target path would improve performance considerably.
> [1] https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md#object-stores-vs-filesystems
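The pattern at issue can be sketched with plain local files (a minimal illustration, not the actual RetriableFileCopyCommand code; the class and temp-file naming here are hypothetical). On HDFS or a POSIX filesystem the rename in the first method is a cheap, atomic metadata operation; on an object store like S3 the same rename becomes a server-side copy of the whole object plus a delete, which is what the proposed direct-write option would avoid:

```java
import java.io.IOException;
import java.nio.file.*;

public class CopyStrategies {
    // Temp-file-and-rename pattern: write to a working file, then
    // rename into place. The rename is atomic and cheap on HDFS/POSIX,
    // but on S3 it is implemented as copy + delete of the full object.
    static void copyViaTempAndRename(Path source, Path target) throws IOException {
        Path temp = target.resolveSibling(".tmp." + target.getFileName());
        Files.copy(source, temp, StandardCopyOption.REPLACE_EXISTING);
        Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING);
    }

    // Direct-write alternative for object stores: write straight to
    // the final path and skip the rename entirely.
    static void copyDirect(Path source, Path target) throws IOException {
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("distcp-demo");
        Path src = Files.writeString(dir.resolve("src.txt"), "payload");
        copyViaTempAndRename(src, dir.resolve("renamed.txt"));
        copyDirect(src, dir.resolve("direct.txt"));
        // Both strategies produce the same final content; they differ
        // only in the cost of the intermediate steps on an object store.
        System.out.println(Files.readString(dir.resolve("renamed.txt")).equals("payload")
                && Files.readString(dir.resolve("direct.txt")).equals("payload"));
    }
}
```

Both paths end with identical data; the point of the proposal is purely to drop the intermediate rename when the target is an object store.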



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org