Posted to common-dev@hadoop.apache.org by "Amareshwari Sriramadasu (JIRA)" <ji...@apache.org> on 2008/07/09 07:58:31 UTC

[jira] Updated: (HADOOP-3370) failed tasks may stay forever in TaskTracker.runningJobs

     [ https://issues.apache.org/jira/browse/HADOOP-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amareshwari Sriramadasu updated HADOOP-3370:
--------------------------------------------

    Attachment: patch-3370-0.17.txt

Patch for 0.17

> failed tasks may stay forever in TaskTracker.runningJobs
> --------------------------------------------------------
>
>                 Key: HADOOP-3370
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3370
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>            Priority: Critical
>             Fix For: 0.18.0
>
>         Attachments: 3370-1.patch, 3370-2.patch, 3370-3.patch, 3370-4.patch, patch-3370-0.17.txt
>
>
> The net effect is that, on a long-running TaskTracker, ReduceTasks on that TaskTracker take a very long time to fetch map outputs - the TaskTracker fetches map output locations for every reduce task in TaskTracker.runningJobs, including the stale ReduceTasks. With a 5-second delay between two consecutive requests, a running ReduceTask can wait a long time for its map output locations when there are tens of stale ReduceTasks ahead of it. This also leaks memory, but at this rate that is a minor concern.
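> As an illustrative (not measured) figure: with 20 stale ReduceTasks ahead of a live one and a 5-second gap between requests, the live ReduceTask could wait on the order of 20 x 5 = 100 seconds for each round of map output locations.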
> I verified the bug by adding an HTML table for TaskTracker.runningJobs to the TaskTracker HTTP interface, on a 2-node cluster, with a single-mapper, single-reducer job in which the mapper succeeds and the reducer fails. The ReduceTask is still visible in TaskTracker.runningJobs, while it is absent from the first two tables (TaskTracker.tasks and TaskTracker.runningTasks).
> Details:
> TaskRunner.run() will call TaskTracker.reportTaskFinished() when the task fails,
> which calls TaskTracker.TaskInProgress.taskFinished,
> which calls TaskTracker.TaskInProgress.cleanup(),
> which calls TaskTracker.tasks.remove(taskId).
> In short, it removes the failed task from TaskTracker.tasks, but not from TaskTracker.runningJobs (sketched below).
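> A minimal sketch of that cleanup path as described above (field and method names are simplified assumptions for illustration, not the exact 0.17 source):
>
>     // Simplified view of the TaskTracker bookkeeping involved:
>     //   tasks       : task attempt id -> TaskInProgress (all attempts on this node)
>     //   runningJobs : job id -> RunningJob, which still references the attempt
>     private synchronized void cleanupFailedTask(String taskId) {
>       tasks.remove(taskId);   // the failed attempt is dropped here...
>       // ...but the corresponding entry in runningJobs keeps a reference to it,
>       // so the stale ReduceTask is still polled for map output locations.
>     }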
> Then the failure is reported to JobTracker.
> JobTracker.heartbeat will call processHeartbeat, 
> which calls updateTaskStatuses, 
> which calls tip.getJob().updateTaskStatus, 
> which calls JobInProgress.failedTask,
> which calls JobTracker.markCompletedTaskAttempt, 
> which puts the task to trackerToMarkedTasksMap, 
> and then JobTracker.heartbeat will call removeMarkedTasks,
> which calls removeTaskEntry, 
> which removes it from trackerToTaskMap.
> JobTracker.heartbeat will also call JobTracker.getTasksToKill,
> which reads from trackerToTaskMap for <tracker, task> pairs,
> and asks the tracker to KILL the task, or the job it belongs to (see the ordering sketch below).
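> A rough sketch of why no KILL is generated in the problem case (names and signatures are simplified assumptions following the description above, not the literal JobTracker source):
>
>     // trackerToTaskMap : tracker name -> set of task attempt ids known on that tracker
>     List<String> getTasksToKill(String trackerName) {
>       List<String> killList = new ArrayList<String>();
>       Set<String> taskIds = trackerToTaskMap.get(trackerName);
>       if (taskIds != null) {
>         for (String taskId : taskIds) {
>           // add KILL-task / KILL-job directives for completed or failed attempts
>         }
>       }
>       // If removeMarkedTasks has already dropped the failed attempt from
>       // trackerToTaskMap, it is never seen here, so no KILL directive is
>       // ever sent back to the TaskTracker for it.
>       return killList;
>     }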
> In the case where there is only one task for a given job on a given tracker 
> and that task failed (NOTE: and that task is not the last failed attempt of the
> job - otherwise JobTracker.getTasksToKill will pick it up before 
> removeMarkedTasks comes in and removes it from trackerToTaskMap), the task 
> tracker will not receive a KILL-task or KILL-job message from the JobTracker.
> As a result, the task will remain in TaskTracker.runningJobs forever.
> Solution:
> Remove the task from TaskTracker.runningJobs at the same time as it is removed from TaskTracker.tasks (see the sketch below; the attached patches contain the actual fix).
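> One possible shape of that fix, as a hedged sketch (types and field names are simplified assumptions; the attached patches are authoritative):
>
>     private synchronized void cleanupFailedTask(String taskId, String jobId) {
>       tasks.remove(taskId);                      // existing behaviour
>       RunningJob rjob = runningJobs.get(jobId);  // also scrub the job-side bookkeeping
>       if (rjob != null) {
>         rjob.tasks.remove(taskId);               // drop the failed attempt from the job's task set
>         if (rjob.tasks.isEmpty()) {
>           runningJobs.remove(jobId);             // no live tasks left for this job on this tracker
>         }
>       }
>     }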

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.