Posted to dev@crunch.apache.org by "Josh Wills (JIRA)" <ji...@apache.org> on 2015/12/10 07:31:10 UTC

[jira] [Resolved] (CRUNCH-580) FileTargetImpl#handleOutputs Inefficiency on S3NativeFileSystem

     [ https://issues.apache.org/jira/browse/CRUNCH-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Wills resolved CRUNCH-580.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 0.14.0

Pushed to master. Thanks Jeff!

> FileTargetImpl#handleOutputs Inefficiency on S3NativeFileSystem
> ---------------------------------------------------------------
>
>                 Key: CRUNCH-580
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-580
>             Project: Crunch
>          Issue Type: Bug
>          Components: Core, IO
>    Affects Versions: 0.13.0
>         Environment: Amazon Elastic Map Reduce
>            Reporter: Jeffrey Quinn
>            Assignee: Josh Wills
>             Fix For: 0.14.0
>
>         Attachments: CRUNCH-580.patch, CRUNCH-580.patch
>
>
> We have run into a pretty frustrating inefficiency inside of org.apache.crunch.io.impl.FileTargetImpl#handleOutputs.
> This method loops over all of the partial output files and moves each one to its ultimate destination directory by calling org.apache.hadoop.fs.FileSystem#rename(org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path).
> This is no problem when the org.apache.hadoop.fs.FileSystem in question is HDFS, where #rename is a cheap operation. When an implementation such as S3NativeFileSystem is used, however, it is extremely inefficient: each iteration through the loop makes a single blocking S3 API call, and the loop can run for a very long time when there are many thousands of partial output files.
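For reference, the rename pattern described above looks roughly like the
following sketch. It is illustrative only, built around a hypothetical
moveOutputs helper rather than the actual FileTargetImpl#handleOutputs code;
only the per-file FileSystem#rename call itself is taken from the report.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RenameLoopSketch {

      // Hypothetical helper illustrating the per-file rename pattern described
      // in this issue (not the actual Crunch source).
      static void moveOutputs(FileSystem fs, List<Path> partialOutputs, Path destDir)
          throws IOException {
        for (Path src : partialOutputs) {
          Path dst = new Path(destDir, src.getName());
          // One blocking FileSystem#rename per partial output file. On HDFS this
          // is a cheap operation, but on S3NativeFileSystem each call becomes
          // blocking S3 API traffic, so the loop serializes thousands of round
          // trips when there are thousands of partial outputs.
          if (!fs.rename(src, dst)) {
            throw new IOException("Failed to rename " + src + " to " + dst);
          }
        }
      }
    }

A general mitigation for object-store targets like this is to issue the
renames concurrently (for example, from a small thread pool) so the per-object
latency overlaps instead of accumulating serially.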



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)