Posted to mapreduce-issues@hadoop.apache.org by "Vinay Kumar Thota (JIRA)" <ji...@apache.org> on 2010/07/09 20:06:52 UTC

[jira] Updated: (MAPREDUCE-1710) Process tree clean up of exceeding memory limit tasks.

     [ https://issues.apache.org/jira/browse/MAPREDUCE-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinay Kumar Thota updated MAPREDUCE-1710:
-----------------------------------------

    Attachment: MAPREDUCE-1710.patch

Patch for trunk.

> Process tree clean up of exceeding memory limit tasks.
> ------------------------------------------------------
>
>                 Key: MAPREDUCE-1710
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1710
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>          Components: test
>            Reporter: Vinay Kumar Thota
>            Assignee: Vinay Kumar Thota
>         Attachments: 1710-ydist_security.patch, 1710-ydist_security.patch, 1710-ydist_security.patch, MAPREDUCE-1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch, memorylimittask_1710.patch
>
>
> 1. Submit a job that spawns child processes, each of which exceeds the memory limits. Let the job complete. Check that all the child processes are killed; the overall job should fail (see the sketch below).
> 2. Submit a job that spawns child processes, each of which exceeds the memory limits. Kill or fail the job while it is in progress. Check that all the child processes are killed.
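
A minimal sketch of how scenario 1 might be driven, assuming a plain MapReduce job rather than whatever system-test harness the attached patches use. The class names (MemoryLimitScenario, MemoryHogMapper), the 512 MB / 64 MB figures, and the use of /bin/sleep to form a process tree are illustrative assumptions; the memory-limit property names are the pre-YARN ones and vary across Hadoop versions, and task memory monitoring must be enabled on the cluster for the TaskTracker to kill over-limit trees at all.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class MemoryLimitScenario {

  /**
   * Mapper that builds a small process tree (two sleeping children) and then
   * allocates memory until the task's tree exceeds the configured limit.
   */
  public static class MemoryHogMapper
      extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

    private final List<byte[]> hoard = new ArrayList<byte[]>();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Spawn children so there is an actual process tree to clean up
      // (assumes a Unix TaskTracker host with /bin/sleep available).
      Runtime.getRuntime().exec("sleep 600");
      Runtime.getRuntime().exec("sleep 600");

      // Allocate 64 MB chunks until the memory monitor kills the task.
      while (true) {
        hoard.add(new byte[64 * 1024 * 1024]);
        context.progress();
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Memory-limit property names are the pre-YARN ones; task memory
    // monitoring must also be enabled on the cluster side.
    conf.setLong("mapred.job.map.memory.mb", 512);
    conf.setLong("mapred.job.reduce.memory.mb", 512);
    // Give the child JVM enough heap to overshoot the 512 MB task limit.
    conf.set("mapred.child.java.opts", "-Xmx1024m");

    Job job = new Job(conf, "memory-limit-scenario");
    job.setJarByClass(MemoryLimitScenario.class);
    job.setMapperClass(MemoryHogMapper.class);
    job.setNumReduceTasks(0);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(NullOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    boolean succeeded = job.waitForCompletion(true);

    // Scenario 1 expects the job to fail after the TaskTracker kills the
    // over-limit tasks. Whether any spawned children survived has to be
    // checked on the TaskTracker hosts (e.g. with ps), outside this sketch.
    System.out.println("Job succeeded: " + succeeded + " (expected: false)");
  }
}

For scenario 2, the same job could be killed mid-run (for example with "hadoop job -kill <job-id>") before checking the TaskTracker hosts for surviving children.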

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.