Posted to yarn-issues@hadoop.apache.org by "Maysam Yabandeh (JIRA)" <ji...@apache.org> on 2014/04/21 19:50:20 UTC

[jira] [Commented] (YARN-1969) Earliest Deadline First Scheduling

    [ https://issues.apache.org/jira/browse/YARN-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975776#comment-13975776 ] 

Maysam Yabandeh commented on YARN-1969:
---------------------------------------

An example of this behavior is when a job preempts its reducers to free space for its mappers. The freed space, however, is first offered to the app that has already made a reservation on the node, then to the queues that are running below their fair share, then to the queue to which the app belongs, and only at the end to the app that released the resource in the first place. Note that preemption is just one example; we observe similar inefficiencies when preemption is not involved.
There are already open jiras that could alleviate the problem: e.g., once YARN-1197 is finished, the MRAppMaster can reuse the reducer's container instead of returning it to the RM, and YARN-1404 would allow more flexible scheduling for individual apps. Nevertheless, it seems to us that augmenting the fair scheduler to take such priorities into account addresses the problem in a more general fashion.

I would highly appreciate your feedback.

> Earliest Deadline First Scheduling
> ----------------------------------
>
>                 Key: YARN-1969
>                 URL: https://issues.apache.org/jira/browse/YARN-1969
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Maysam Yabandeh
>            Assignee: Maysam Yabandeh
>
> What we are observing is that some big jobs with many allocated containers are waiting for a few containers to finish. Under *fair-share scheduling*, however, they have a low priority since there are other jobs (usually much smaller newcomers) that are using resources well below their fair share; hence newly released containers are not offered to the big, yet close-to-finished, job. Nevertheless, everybody would benefit from an "unfair" scheduling that offers the resource to the big job, since the sooner the big job finishes, the sooner it releases its "many" allocated resources for use by other jobs. In other words, what we need is a variation of *Earliest Deadline First scheduling* that takes into account the number of already-allocated resources and the estimated time to finish.
> http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling
> For example, if a job is using MEM GB of memory and is expected to finish in TIME minutes, its scheduling priority would be a function p of (MEM, TIME). The expected time to finish can be estimated by the AppMaster using TaskRuntimeEstimator#estimatedRuntime and supplied to the RM in the resource request messages. To be less susceptible to apps gaming the system, we can limit this scheduling to *only within a queue*: i.e., add an EarliestDeadlinePolicy extending SchedulingPolicy and let queues opt in by setting the "schedulingPolicy" field.
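To make the p(MEM, TIME) idea above concrete, here is a minimal, self-contained Java sketch of such an ordering. The App class, its fields, and the particular priority formula are illustrative assumptions, not YARN code; a real implementation would extend the FairScheduler's SchedulingPolicy and compare Schedulable instances instead.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class EdfSketch {
    // Illustrative stand-in for an application's scheduling state; not a YARN class.
    static final class App {
        final String name;
        final double memGB;                    // memory currently allocated to the app
        final double estimatedMinutesToFinish; // e.g., from TaskRuntimeEstimator

        App(String name, double memGB, double estimatedMinutesToFinish) {
            this.name = name;
            this.memGB = memGB;
            this.estimatedMinutesToFinish = estimatedMinutesToFinish;
        }

        // One possible choice of p(MEM, TIME): apps holding many resources and
        // close to completion get a high priority, since finishing them sooner
        // frees their containers for everyone else. The "+ 1.0" avoids division
        // by zero for apps about to finish.
        double priority() {
            return memGB / (estimatedMinutesToFinish + 1.0);
        }
    }

    // Order apps from highest to lowest priority.
    static final Comparator<App> EDF =
            Comparator.comparingDouble(App::priority).reversed();

    public static void main(String[] args) {
        List<App> apps = new ArrayList<>();
        apps.add(new App("small-newcomer", 4, 60));    // small job, long way to go
        apps.add(new App("big-almost-done", 400, 5));  // big job, nearly finished
        apps.sort(EDF);
        // The big, close-to-finished job sorts first and is offered resources first.
        System.out.println(apps.get(0).name);
    }
}
```

Note that under plain fair-share ordering the small newcomer would win, since it is far below its fair share; the point of the sketch is only to show how a (MEM, TIME)-based comparator inverts that decision within a queue.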



--
This message was sent by Atlassian JIRA
(v6.2#6252)