Posted to mapreduce-issues@hadoop.apache.org by "Ramkumar Vadali (JIRA)" <ji...@apache.org> on 2010/07/01 00:30:51 UTC

[jira] Updated: (MAPREDUCE-1838) DistRaid map tasks have large variance in running times

     [ https://issues.apache.org/jira/browse/MAPREDUCE-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ramkumar Vadali updated MAPREDUCE-1838:
---------------------------------------

    Attachment: MAPREDUCE-1838.patch

> DistRaid map tasks have large variance in running times
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-1838
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1838
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>    Affects Versions: 0.20.1
>            Reporter: Ramkumar Vadali
>            Priority: Minor
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-1838.patch
>
>
> HDFS RAID uses map-reduce jobs to generate parity files for a set of source files. Each map task gets a subset of files to operate on. The current code assigns files by walking through the list of files given to the DistRaid constructor.
> The problem is that the list of files given to the constructor is (pretty much) in directory-listing order. When a large number of files is added, adjacent files in that order tend to have similar sizes. Thus one map task can end up with only large files whereas another ends up with only small files, increasing the variance in run times.
> We could do smarter assignment by using the file sizes.
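As a rough illustration of what "smarter assignment by using the file sizes" could mean, the sketch below uses a greedy longest-processing-time heuristic: sort files by size descending, then always hand the next file to the partition with the smallest total so far. This is only a hypothetical sketch, not the attached patch; the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SizeAwareAssignment {

    /**
     * Greedy LPT partitioning: sort file sizes in descending order,
     * then assign each file to the currently lightest partition.
     * Returns numTasks lists of file sizes, one per map task.
     */
    static List<List<Long>> assign(long[] sizes, int numTasks) {
        long[] sorted = sizes.clone();
        Arrays.sort(sorted); // ascending; we iterate from the end for descending

        List<List<Long>> partitions = new ArrayList<>();
        long[] loads = new long[numTasks];
        for (int t = 0; t < numTasks; t++) {
            partitions.add(new ArrayList<>());
        }

        for (int i = sorted.length - 1; i >= 0; i--) {
            // Find the partition with the smallest total size so far.
            int lightest = 0;
            for (int t = 1; t < numTasks; t++) {
                if (loads[t] < loads[lightest]) {
                    lightest = t;
                }
            }
            partitions.get(lightest).add(sorted[i]);
            loads[lightest] += sorted[i];
        }
        return partitions;
    }
}
```

With this heuristic, a workload mixing a few large files with many small ones ends up with partitions of roughly equal total size, instead of one map task holding all the large files, which is the variance problem described above.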

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.