Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/21 23:34:38 UTC

[jira] [Updated] (MAPREDUCE-399) Duplicate destroy of process trees in TaskMemoryManager.

     [ https://issues.apache.org/jira/browse/MAPREDUCE-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated MAPREDUCE-399:
---------------------------------------

    Labels: newbie  (was: )

> Duplicate destroy of process trees in TaskMemoryManager.
> --------------------------------------------------------
>
>                 Key: MAPREDUCE-399
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-399
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Vinod Kumar Vavilapalli
>            Assignee: Vinod Kumar Vavilapalli
>            Priority: Minor
>              Labels: newbie
>
> TaskMemoryManager currently works only on Linux and terminates tasks that exceed memory limits by first calling TaskTracker.purgeTask() and then explicitly destroying the process tree, to be sure the whole tree is cleaned up. After HADOOP-2721, this explicit process-tree destroy is no longer needed: the usual code path for killing tasks already cleans up the whole process tree.
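
For context, here is a minimal Java sketch of the redundant pattern the issue describes. The class, interface, and method names (other than TaskTracker.purgeTask(), which the issue itself mentions) are illustrative assumptions, not the actual Hadoop MapReduce source:

    // Hypothetical sketch of the duplicate-destroy pattern described above.
    // Names and structure are assumptions for illustration only.
    class TaskMemoryManagerSketch {

      interface ProcessTree {
        void destroy();               // kills every process in the tree
      }

      interface TaskTracker {
        void purgeTask(String taskId); // kills the task; after HADOOP-2721 this
                                       // path already destroys the process tree
      }

      private final TaskTracker tracker;

      TaskMemoryManagerSketch(TaskTracker tracker) {
        this.tracker = tracker;
      }

      void killOverLimitTask(String taskId, ProcessTree tree) {
        // Normal kill path for a task that exceeded its memory limit.
        tracker.purgeTask(taskId);

        // Redundant after HADOOP-2721: purgeTask() already cleans up the
        // whole process tree, so this second destroy is the duplicate
        // the issue proposes removing.
        tree.destroy();
      }
    }

Under that reading, the fix is simply to drop the trailing tree.destroy() call and rely on the purge path alone.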



--
This message was sent by Atlassian JIRA
(v6.2#6252)