Posted to common-dev@hadoop.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2009/06/04 00:28:07 UTC

[jira] Commented: (HADOOP-5186) Improve limit handling in fairshare scheduler

    [ https://issues.apache.org/jira/browse/HADOOP-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716088#action_12716088 ] 

Matei Zaharia commented on HADOOP-5186:
---------------------------------------

A note on progress for this issue: I have a tested patch that removes deficits and adds support for FIFO pools (though not yet the lazy job initialization discussed here). However, I am waiting for HADOOP-4665 and HADOOP-4667 to be committed before posting it, since my patch depends on both.
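
(For illustration only: a minimal sketch of what FIFO ordering within a pool could look like. The class and field names below are hypothetical and are not taken from the actual patch.)

    // Hypothetical sketch: order a pool's jobs FIFO by submission time,
    // breaking ties by job ID so the ordering is deterministic.
    import java.util.Comparator;

    class JobInfo {
      long startTime;   // when the job was submitted
      String jobId;     // e.g. "job_200906040001_0001"
      JobInfo(long startTime, String jobId) {
        this.startTime = startTime;
        this.jobId = jobId;
      }
    }

    class FifoJobComparator implements Comparator<JobInfo> {
      public int compare(JobInfo a, JobInfo b) {
        if (a.startTime != b.startTime) {
          return a.startTime < b.startTime ? -1 : 1;  // earlier job first
        }
        return a.jobId.compareTo(b.jobId);            // deterministic tie-break
      }
    }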

> Improve limit handling in fairshare scheduler
> ---------------------------------------------
>
>                 Key: HADOOP-5186
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5186
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/fair-share
>            Reporter: Hemanth Yamijala
>            Priority: Minor
>
> The fairshare scheduler can limit the number of jobs running in a pool through the maxRunningJobs parameter in its allocations file. This limit is treated as a hard limit and takes effect even when the cluster has free capacity to run more jobs, resulting in underutilization. The same likely happens with the per-user maxRunningJobs parameter and with userMaxJobsDefault. It may help to treat these as soft limits and run additional jobs to keep the cluster fully utilized.
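
(For illustration only: a minimal sketch of the soft-limit behavior proposed above. The names below are hypothetical, not the scheduler's actual API; idleSlots stands in for whatever measure of free cluster capacity the scheduler consults.)

    // Hypothetical sketch of soft-limit admission: respect maxRunningJobs
    // while the cluster is busy, but allow extra jobs when slots sit idle.
    class PoolInfo {
      int runningJobs;      // jobs currently running in this pool
      int maxRunningJobs;   // configured cap from the allocations file
    }

    class SoftLimitPolicy {
      // A pool may start another job if it is under its configured limit,
      // or if the cluster has idle slots that would otherwise go unused.
      static boolean canStartJob(PoolInfo pool, int idleSlots) {
        if (pool.runningJobs < pool.maxRunningJobs) {
          return true;        // under the cap: always allowed
        }
        return idleSlots > 0; // over the cap: only to soak up free capacity
      }
    }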

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.