Posted to mapreduce-issues@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2011/02/02 00:13:29 UTC

[jira] Commented: (MAPREDUCE-2256) FairScheduler fairshare preemption from multiple pools may preempt all tasks from one pool causing that pool to go below fairshare.

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12989444#comment-12989444 ] 

Hudson commented on MAPREDUCE-2256:
-----------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #595 (See [https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/595/])
    

> FairScheduler fairshare preemption from multiple pools may preempt all tasks from one pool causing that pool to go below fairshare.
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2256
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2256
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/fair-share
>    Affects Versions: 0.21.1, 0.22.0
>            Reporter: Priyo Mustafi
>            Assignee: Priyo Mustafi
>             Fix For: 0.22.0
>
>         Attachments: mapreduce-2256_0_22.txt
>
>
> Scenario:
> You have a cluster with 600 map slots and 3 pools.  The fair share for each pool starts at 200 slots.  The fair share preemption timeout is 5 minutes.
> 1)  Pool1 schedules 300 map tasks first
> 2)  Pool2 then schedules another 300 map tasks
> 3)  Pool3 demands 300 map tasks but doesn't get any slots because all slots are taken.
> 4)  After 5 minutes, pool3 should preempt 200 map slots.  Instead of preempting 100 slots each from pool1 and pool2, the bug causes all 200 slots to be preempted from pool2 (the last pool started), pushing it below its fair share.  This happens because the preemptTask method does not reduce a pool's count of running tasks as it preempts them (see the sketch after this report).
> The above scenario is an extreme case, but some amount of excess preemption will occur because of this bug.
> The patch I created is for 0.22.0, but the fix should work on 0.21 as well, as it appears to have the same bug.
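
To make the accounting bug concrete, below is a minimal, hypothetical sketch of the preemption loop described in the report. The names here (Pool, runningTasks, preemptTasks) are illustrative simplifications, not the actual FairScheduler API; the real change is in the attached patch.

    import java.util.Comparator;
    import java.util.List;

    class Pool {
        final String name;
        int runningTasks;    // tasks currently running in this pool
        final int fairShare; // this pool's fair share of map slots

        Pool(String name, int runningTasks, int fairShare) {
            this.name = name;
            this.runningTasks = runningTasks;
            this.fairShare = fairShare;
        }

        int tasksOverFairShare() {
            return Math.max(0, runningTasks - fairShare);
        }
    }

    public class PreemptionSketch {
        // Preempt up to tasksToPreempt tasks without pushing any pool
        // below its fair share.
        static void preemptTasks(List<Pool> pools, int tasksToPreempt) {
            while (tasksToPreempt > 0) {
                // Re-pick the victim on every iteration: the pool that
                // is currently furthest over its fair share.
                Pool victim = pools.stream()
                        .filter(p -> p.tasksOverFairShare() > 0)
                        .max(Comparator.comparingInt(Pool::tasksOverFairShare))
                        .orElse(null);
                if (victim == null) {
                    break; // no pool is above its fair share any more
                }
                // (kill one of victim's most recently launched tasks here)
                // The fix: account for the kill immediately. The bug was
                // skipping this decrement, so the same pool kept looking
                // "most over fair share" and absorbed all the preemption.
                victim.runningTasks--;
                tasksToPreempt--;
            }
        }

        public static void main(String[] args) {
            // The scenario from the report: fair share of 200 each;
            // pool3 (the beneficiary) is not modeled.
            Pool pool1 = new Pool("pool1", 300, 200);
            Pool pool2 = new Pool("pool2", 300, 200);
            preemptTasks(List.of(pool1, pool2), 200);
            // Prints pool1=200 pool2=200: 100 preempted from each.
            System.out.println("pool1=" + pool1.runningTasks
                    + " pool2=" + pool2.runningTasks);
        }
    }

With the decrement in place, the scenario above ends with pool1 and pool2 each giving up 100 slots; without it, whichever pool the loop keeps selecting gives up all 200 and drops below its fair share.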
