Posted to common-dev@hadoop.apache.org by "Amareshwari Sriramadasu (JIRA)" <ji...@apache.org> on 2008/09/24 13:32:45 UTC
[jira] Assigned: (HADOOP-4261) Jobs failing in the init stage will never cleanup
[ https://issues.apache.org/jira/browse/HADOOP-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Amareshwari Sriramadasu reassigned HADOOP-4261:
-----------------------------------------------
Assignee: Amareshwari Sriramadasu
> Jobs failing in the init stage will never cleanup
> -------------------------------------------------
>
> Key: HADOOP-4261
> URL: https://issues.apache.org/jira/browse/HADOOP-4261
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: Amar Kamat
> Assignee: Amareshwari Sriramadasu
> Priority: Blocker
>
> Pre-HADOOP-3150, if the job failed in the init stage, {{job.kill()}} was called. This used to make sure that the job was cleaned up w.r.t.:
> - status set to KILLED/FAILED
> - job files deleted from the system dir
> - job history files closed
> - jobtracker made aware of this through {{jobTracker.finalizeJob()}}
> - data structures cleaned up via {{JobInProgress.garbageCollect()}}
> Now, if the job fails in the init stage, {{job.fail()}} is called, which doesn't do the cleanup. HADOOP-3150 introduced cleanup tasks that are launched once the job completes, i.e. is killed/failed/succeeded. The jobtracker will never consider this job for scheduling, as the job will remain in the {{PREP}} state forever.
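The state-machine problem described above can be sketched in a few lines. This is a hypothetical, heavily simplified model (the class, fields, and method names below are illustrative, not the actual Hadoop code): the kill path performs cleanup and moves the job out of {{PREP}}, while a failure path that skips cleanup leaves the job stuck in {{PREP}} with its resources never released.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the HADOOP-4261 bug: a job that
// fails during init must run the same cleanup as the kill path,
// otherwise it stays in PREP forever and is never finalized.
class Job {
    enum State { PREP, KILLED, FAILED }

    State state = State.PREP;
    final List<String> cleanupLog = new ArrayList<>();

    // Pre-HADOOP-3150 style: kill() sets a terminal state AND cleans up.
    void kill() {
        state = State.KILLED;
        garbageCollect();
    }

    // Buggy failure path: nothing happens, so the job never leaves PREP
    // and no cleanup/finalization ever runs.
    void failWithoutCleanup() {
        // state stays PREP; cleanupLog stays empty
    }

    // Fixed failure path: mirror the cleanup that kill() performs.
    void failWithCleanup() {
        state = State.FAILED;
        garbageCollect();
    }

    // Stand-in for the cleanup steps listed in the issue description.
    void garbageCollect() {
        cleanupLog.add("deleted job files from system dir");
        cleanupLog.add("closed job history files");
        cleanupLog.add("notified jobtracker (finalizeJob)");
    }
}

public class InitFailureDemo {
    public static void main(String[] args) {
        Job broken = new Job();
        broken.failWithoutCleanup();
        System.out.println("broken: state=" + broken.state
                + " cleanupSteps=" + broken.cleanupLog.size());

        Job fixed = new Job();
        fixed.failWithCleanup();
        System.out.println("fixed: state=" + fixed.state
                + " cleanupSteps=" + fixed.cleanupLog.size());
    }
}
```

Running the demo prints `broken: state=PREP cleanupSteps=0` versus `fixed: state=FAILED cleanupSteps=3`, showing why the buggy job is never scheduled or finalized.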
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.