Posted to mapreduce-dev@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/30 19:54:38 UTC
[jira] [Resolved] (MAPREDUCE-1710) Process tree clean up of exceeding memory limit tasks.
[ https://issues.apache.org/jira/browse/MAPREDUCE-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Allen Wittenauer resolved MAPREDUCE-1710.
-----------------------------------------
Resolution: Fixed
Fixed in 2.x. Closing.
> Process tree clean up of exceeding memory limit tasks.
> ------------------------------------------------------
>
> Key: MAPREDUCE-1710
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1710
> Project: Hadoop Map/Reduce
> Issue Type: Task
> Components: test
> Reporter: Vinay Kumar Thota
> Assignee: Vinay Kumar Thota
> Attachments: 1710-ydist_security.patch, 1710-ydist_security.patch, 1710-ydist_security.patch, ASF.LICENSE.NOT.GRANTED--memorylimittask_1710.patch, MAPREDUCE-1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch
>
>
> 1. Submit a job which would spawn child processes, each of which exceeds the memory limits. Let the job complete. Check that all the child processes are killed; the overall job should fail.
> 2. Submit a job which would spawn child processes, each of which exceeds the memory limits. Kill/fail the job while it is in progress. Check that all the child processes are killed.
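Both scenarios hinge on the same mechanism: killing the task's entire process tree, not just the top-level task JVM, so that grandchild processes cannot survive a memory-limit violation. A minimal, hypothetical sketch of that process-group kill (Unix-only Python; names and flow are illustrative, not Hadoop's actual TaskController code):

```python
import os
import signal
import subprocess
import time

# Spawn a "task" shell that forks its own child, in a fresh process
# group (start_new_session=True makes the shell the group leader).
task = subprocess.Popen(
    ["sh", "-c", "sleep 30 & sleep 30"],
    start_new_session=True,
)
time.sleep(0.5)  # let the shell fork its background child

pgid = os.getpgid(task.pid)

# Kill the whole group, as a task controller would when the task
# exceeds its memory limit -- this reaches the grandchildren too.
os.killpg(pgid, signal.SIGKILL)
task.wait()  # reap the group leader

# Poll until no member of the group is left: signal 0 probes the
# group, and ProcessLookupError means the group is empty.
survivors = True
deadline = time.time() + 5
while time.time() < deadline:
    try:
        os.killpg(pgid, 0)
        time.sleep(0.1)  # members (or zombies) still present
    except ProcessLookupError:
        survivors = False
        break

print(survivors)  # False: the whole tree is gone
```

The two test scenarios quoted above are essentially this check run against a real cluster: after job completion or a mid-flight kill, no process in the task's group may remain alive.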
--
This message was sent by Atlassian JIRA
(v6.2#6252)