Posted to mapreduce-issues@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2011/04/07 17:42:13 UTC

[jira] [Commented] (MAPREDUCE-1819) RaidNode should be smarter in submitting Raid jobs

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13016910#comment-13016910 ] 

Hudson commented on MAPREDUCE-1819:
-----------------------------------

Integrated in Hadoop-Mapreduce-trunk #643 (See [https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/643/])
    

> RaidNode should be smarter in submitting Raid jobs
> --------------------------------------------------
>
>                 Key: MAPREDUCE-1819
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1819
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>    Affects Versions: 0.20.1
>            Reporter: Ramkumar Vadali
>            Assignee: Ramkumar Vadali
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-1819.4.patch, MAPREDUCE-1819.5.patch, MAPREDUCE-1819.patch, MAPREDUCE-1819.patch.2, MAPREDUCE-1819.patch.3
>
>
> The RaidNode currently computes parity files as follows:
> 1. Using RaidNode.selectFiles() to determine which files to raid for a given policy
> 2. Repeating #1 for each configured policy to accumulate a single list of files
> 3. Submitting one MapReduce job with the accumulated list from #2 using DistRaid.doDistRaid()
> This task addresses the fact that #2 and #3 happen sequentially: no job is submitted until every policy has been scanned. The proposal is to submit a separate MapReduce job for each policy's list of files and to use another thread to track the progress of the submitted jobs (a sketch follows below). This will help reduce the time it takes for files to be raided.
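A minimal sketch of the proposed shape, not the committed patch: it uses the old org.apache.hadoop.mapred API of that era and assumes the caller has already built one JobConf per policy. The class name PerPolicyRaidSubmitter, the 10-second polling interval, and the structure of the monitor thread are illustrative assumptions, not the actual contrib/raid code.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class PerPolicyRaidSubmitter {

  // Jobs that have been submitted but not yet observed to finish.
  private final List<RunningJob> runningJobs = new ArrayList<RunningJob>();

  // Submit one MapReduce job per policy instead of one job covering all
  // policies. Each JobConf is assumed to already describe that policy's
  // file list (the real setup lives in DistRaid).
  public void submitJobs(List<JobConf> perPolicyConfs) throws IOException {
    for (JobConf conf : perPolicyConfs) {
      JobClient client = new JobClient(conf);
      synchronized (runningJobs) {
        runningJobs.add(client.submitJob(conf));  // returns without waiting
      }
    }
  }

  // Separate thread that polls the submitted jobs until all have completed.
  // Assumes submitJobs() has already run at least once.
  public Thread startMonitor() {
    Thread monitor = new Thread(new Runnable() {
      public void run() {
        try {
          while (true) {
            synchronized (runningJobs) {
              Iterator<RunningJob> it = runningJobs.iterator();
              while (it.hasNext()) {
                RunningJob job = it.next();
                if (job.isComplete()) {
                  // A real implementation would record success or failure here.
                  it.remove();
                }
              }
              if (runningJobs.isEmpty()) {
                return;
              }
            }
            Thread.sleep(10000);  // poll every 10 seconds (illustrative)
          }
        } catch (IOException e) {
          // A real implementation would log and decide whether to retry.
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    });
    monitor.setDaemon(true);
    monitor.start();
    return monitor;
  }
}

The point of the sketch is the decoupling: job submission returns immediately for each policy, and the monitor thread tracks completion, so a slow scan or job for one policy no longer delays raiding files selected by the others.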

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira