Posted to common-dev@hadoop.apache.org by "Arun C Murthy (JIRA)" <ji...@apache.org> on 2009/06/03 03:00:08 UTC

[jira] Updated: (HADOOP-5964) Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs

     [ https://issues.apache.org/jira/browse/HADOOP-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-5964:
----------------------------------

    Attachment: HADOOP-5964_0_20090602.patch

Very early patch.

I haven't introduced a WAITING_FOR_SLOT task state, since it might be desirable for the ExpireLaunchingTasks thread to actually kill high-RAM jobs that have waited too long for a slot. Thoughts?
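
For illustration, here is a minimal sketch of what such an expiry thread could look like. This is an assumption, not the real JobTracker/ExpireLaunchingTasks code: SlotWaitExpirer, MAX_SLOT_WAIT_MS and killWaitingTask are all hypothetical names.

    // Hypothetical sketch only -- not the actual JobTracker code.
    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class SlotWaitExpirer extends Thread {
      // How long a high-RAM task may wait for a suitable slot before
      // being killed (assumed value, would be configurable in practice).
      private static final long MAX_SLOT_WAIT_MS = 10 * 60 * 1000L;

      // taskId -> wall-clock time at which the task started waiting
      private final Map<String, Long> waitingTasks =
          new ConcurrentHashMap<String, Long>();

      void startWaiting(String taskId) {
        waitingTasks.put(taskId, System.currentTimeMillis());
      }

      void slotAssigned(String taskId) {
        waitingTasks.remove(taskId);
      }

      @Override
      public void run() {
        while (!isInterrupted()) {
          long now = System.currentTimeMillis();
          for (Iterator<Map.Entry<String, Long>> it =
                   waitingTasks.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, Long> e = it.next();
            if (now - e.getValue() > MAX_SLOT_WAIT_MS) {
              killWaitingTask(e.getKey());  // hypothetical kill hook
              it.remove();
            }
          }
          try {
            Thread.sleep(60 * 1000L);       // check once a minute
          } catch (InterruptedException ie) {
            return;
          }
        }
      }

      private void killWaitingTask(String taskId) {
        // A real scheduler would fail the task attempt here; stubbed out.
        System.out.println("Killing " + taskId + ": waited too long for a slot");
      }
    }

The trade-off is the one noted above: with a separate WAITING_FOR_SLOT state the wait becomes an explicit, visible task state, whereas reusing the expiry thread keeps the state machine unchanged but gives the wait a hard deadline.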

> Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-5964
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5964
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.20.0
>            Reporter: Arun C Murthy
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5964_0_20090602.patch
>
>
> When a HighRAMJob reaches the head of the queue, the current implementation of support for HighRAMJobs in the Capacity Scheduler has a problem: the scheduler stops assigning tasks to all TaskTrackers in the cluster until the HighRAMJob finds suitable TaskTrackers for all its tasks (a sketch of this behaviour follows after the quoted description).
> This causes a severe utilization problem, since effectively no new tasks are allowed to run until the HighRAMJob at the head of the queue gets its slots.
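
As a sketch of the drain described above: strict FIFO within the queue means that when the head job needs more memory per task than an offered TaskTracker has free, nothing behind it runs either. This is illustrative only, not the actual CapacityTaskScheduler code; Job, assignTask and the memory numbers are made up.

    import java.util.Arrays;
    import java.util.List;

    class DrainDemo {
      static class Job {
        final String name;
        final int memPerTaskMB;
        Job(String name, int memPerTaskMB) {
          this.name = name;
          this.memPerTaskMB = memPerTaskMB;
        }
      }

      // Returns the job to schedule on a tracker with the given free
      // memory, or null if nothing can be scheduled.
      static Job assignTask(List<Job> queue, int trackerFreeMemMB) {
        if (queue.isEmpty()) return null;
        Job head = queue.get(0);
        // FIFO discipline: only the head of the queue may be scheduled.
        // If it doesn't fit, nothing is assigned -- even though jobs
        // further back in the queue would fit.
        return head.memPerTaskMB <= trackerFreeMemMB ? head : null;
      }

      public static void main(String[] args) {
        List<Job> queue = Arrays.asList(
            new Job("highRAM", 6144), new Job("normal", 1024));
        // A tracker with 2 GB free cannot host the 6 GB high-RAM task,
        // so *no* task is assigned, although the 1 GB "normal" job
        // behind it would fit: the cluster drains.
        System.out.println(assignTask(queue, 2048));  // prints: null
      }
    }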

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.