Posted to mapreduce-issues@hadoop.apache.org by "Angus He (JIRA)" <ji...@apache.org> on 2013/05/12 05:47:15 UTC
[jira] [Created] (MAPREDUCE-5241) FairScheduler preempts tasks
from the pool whose fair share is below the minimum quota
Angus He created MAPREDUCE-5241:
-----------------------------------
Summary: FairScheduler preempts tasks from the pool whose fair share is below the minimum quota
Key: MAPREDUCE-5241
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5241
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/fair-share
Affects Versions: 0.22.0, 0.21.0, 0.20.2
Reporter: Angus He
The code snippet below is from class FairScheduler.UpdateThread:
{code}
public void run() {
  while (running) {
    try {
      Thread.sleep(updateInterval);
      update();
      dumpIfNecessary();
      preemptTasksIfNecessary();
    } catch (Exception e) {
      LOG.error("Exception in fair scheduler UpdateThread", e);
    }
  }
}
{code}
Suppose pool A has its minimum share of map slots set to 10, and a user submits a job with 5 maps to pool A; the 5 maps start running immediately. After update() in UpdateThread.run() executes, pool A gets a map share of 5 and has 5 running map tasks. Before preemptTasksIfNecessary() is called, a speculative map task could start, so pool A now has 6 running map tasks but still a map share of 5, and the new speculative task could be preempted. Since the number of running map tasks (6) is still below pool A's minimum share (10), it is probably wrong to preempt any tasks from pool A. A possible fix is to make the calls to update() and preemptTasksIfNecessary() atomic to eliminate the race condition; the following is a first try:
{code}
public void run() {
  while (running) {
    try {
      Thread.sleep(updateInterval);
      synchronized (taskTrackerManager) {
        synchronized (FairScheduler.this) {
          update();
          dumpIfNecessary();
          preemptTasksIfNecessary();
        }
      }
    } catch (Exception e) {
      LOG.error("Exception in fair scheduler UpdateThread", e);
    }
  }
}
{code}
{code}
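The race can be reproduced with a minimal, self-contained model. PoolModel and its fields below are hypothetical stand-ins for the real scheduler state, not actual FairScheduler classes; the interleaving is forced explicitly to show why a stale fair share triggers a wrong preemption:

{code}
// Hypothetical model of one pool's scheduling state (illustrative only).
class PoolModel {
  int minShare = 10;      // pool A's minimum map-slot share
  int runningTasks = 5;   // running map tasks in pool A
  int fairShare = 0;      // last value computed by update()

  // Recompute the fair share from the current running-task count,
  // capped at the minimum share (simplified from the real algorithm).
  void update() { fairShare = Math.min(runningTasks, minShare); }

  // Number of tasks that would be preempted: anything above fair share.
  int preemptIfNecessary() { return Math.max(0, runningTasks - fairShare); }
}

public class RaceDemo {
  public static void main(String[] args) {
    PoolModel pool = new PoolModel();

    pool.update();                  // fairShare = 5 (snapshot of running = 5)
    pool.runningTasks++;            // speculative task starts in the gap
    // Preemption now compares running (6) against the stale share (5)
    // and wrongly preempts one task, even though 6 < minShare (10).
    System.out.println("preempted with race: " + pool.preemptIfNecessary());

    // If update() and preemption run atomically, the recomputed share (6)
    // matches the running count and nothing is preempted.
    pool.update();
    System.out.println("preempted atomically: " + pool.preemptIfNecessary());
  }
}
{code}

Running this prints "preempted with race: 1" and "preempted atomically: 0", matching the scenario described above.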
Another possible fix is to call FairScheduler.update() in FairScheduler.assignTasks() to re-calculate the fair shares.
Any comments?
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira