Posted to common-dev@hadoop.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2009/06/04 00:26:07 UTC

[jira] Commented: (HADOOP-4803) large pending jobs hog resources

    [ https://issues.apache.org/jira/browse/HADOOP-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716087#action_12716087 ] 

Matei Zaharia commented on HADOOP-4803:
---------------------------------------

A note on progress for this issue: I have a tested patch that removes deficits and also adds support for FIFO pools, but I am waiting for HADOOP-4665 and HADOOP-4667 to be committed before posting it because it depends on those.
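
For anyone following the design: the rough idea is that each pool gets its own internal scheduling mode, so a FIFO pool orders its jobs by submission time while a fair pool orders them by how far each job is below its fair share. Below is a minimal sketch of that distinction; all class, field, and method names are hypothetical and for illustration only, not taken from the actual patch.

// Illustrative sketch only -- not the HADOOP-4803 patch itself.
// A "fair" pool orders jobs by how far below their fair share they are,
// while a "FIFO" pool orders jobs by submission time.
import java.util.Comparator;
import java.util.List;

enum SchedulingMode { FAIR, FIFO }

class JobInfo {
    long startTime;        // when the job was submitted
    int runningTasks;      // tasks currently running
    double fairShare;      // slots this job deserves under fair sharing

    JobInfo(long startTime, int runningTasks, double fairShare) {
        this.startTime = startTime;
        this.runningTasks = runningTasks;
        this.fairShare = fairShare;
    }
}

class PoolScheduler {
    /** FIFO pools: earlier-submitted jobs first. */
    static final Comparator<JobInfo> FIFO_ORDER =
        Comparator.comparingLong(j -> j.startTime);

    /** Fair pools: jobs furthest below their fair share first,
     *  which keeps small jobs from being starved by large ones. */
    static final Comparator<JobInfo> FAIR_ORDER =
        Comparator.comparingDouble(j -> j.runningTasks / Math.max(j.fairShare, 1.0));

    static void sortForScheduling(List<JobInfo> jobs, SchedulingMode mode) {
        jobs.sort(mode == SchedulingMode.FIFO ? FIFO_ORDER : FAIR_ORDER);
    }
}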

> large pending jobs hog resources
> --------------------------------
>
>                 Key: HADOOP-4803
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4803
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fair-share
>            Reporter: Joydeep Sen Sarma
>            Assignee: Matei Zaharia
>
> Observing the cluster over the last day, one thing I noticed is that small jobs (single-digit task counts) are not competing well against large jobs. What seems to happen is that:
> - a large job comes along and needs to wait for a while behind other large jobs.
> - slots are slowly transferred from one large job to another.
> - small tasks keep waiting forever.
> Is this an artifact of deficit-based scheduling? It seems that long-pending large jobs are out-scheduling small jobs.
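
To make the suspected failure mode concrete: under the deficit scheme, each job accumulates a deficit (roughly, the slot-seconds it was owed under fair sharing but did not receive), and free slots go to the job with the largest deficit. A large job that has been pending for a long time can therefore build up a deficit that dwarfs a freshly submitted small job's, so slots keep flowing to the large jobs. A toy sketch of that arithmetic, with hypothetical numbers and names (not actual scheduler code):

// Toy illustration of deficit-based ordering (hypothetical numbers,
// not fair-scheduler code). Each job's deficit grows by
// (fairShare - runningTasks) * elapsedTime, and the job with the
// largest deficit is scheduled first.
class DeficitExample {
    static double updateDeficit(double deficit, double fairShare,
                                int runningTasks, double elapsedSeconds) {
        return deficit + (fairShare - runningTasks) * elapsedSeconds;
    }

    public static void main(String[] args) {
        // Large job: fair share of 50 slots, has been waiting 600s with 0 slots running.
        double largeJobDeficit = updateDeficit(0.0, 50.0, 0, 600.0);   // 30000 slot-seconds

        // Small job: fair share of 5 slots, just submitted (waited 10s).
        double smallJobDeficit = updateDeficit(0.0, 5.0, 0, 10.0);     // 50 slot-seconds

        // The large job's accumulated deficit dwarfs the small job's,
        // so free slots keep flowing to large jobs and the small job waits.
        System.out.printf("large=%.0f small=%.0f -> schedule %s first%n",
            largeJobDeficit, smallJobDeficit,
            largeJobDeficit > smallJobDeficit ? "large" : "small");
    }
}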

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.