Posted to common-dev@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2009/06/11 22:00:10 UTC
[jira] Commented: (HADOOP-5883) TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory
[ https://issues.apache.org/jira/browse/HADOOP-5883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718617#action_12718617 ]
Hudson commented on HADOOP-5883:
--------------------------------
Integrated in Hadoop-trunk #863 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/863/])
> TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory
> --------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5883
> URL: https://issues.apache.org/jira/browse/HADOOP-5883
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: Hemanth Yamijala
> Assignee: Hemanth Yamijala
> Fix For: 0.20.1
>
> Attachments: HADOOP-5883-20.patch, HADOOP-5883-20.patch, HADOOP-5883.patch, HADOOP-5883.patch, HADOOP-5883.patch
>
>
> Currently the TaskMemoryMonitorThread kills a task as soon as it detects the task consuming more memory than the configured maximum. There are valid cases (see HADOOP-5059) where a task that launches a subprocess can momentarily appear to use twice the requested memory, because fork() duplicates the parent's virtual address space until the child calls exec(). Ideally the monitoring thread should tolerate such short-lived spikes instead of shooting the task down; a simple approach is sketched below.
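
For illustration, here is a minimal sketch of one way a monitor can tolerate momentary spikes: a task is killed only after it has stayed over its limit for several consecutive polling intervals. This is not the committed HADOOP-5883 patch; all names here (LenientMemoryMonitor, MemoryProbe, GRACE_INTERVALS, POLL_MS) are hypothetical.

import java.util.HashMap;
import java.util.Map;

public class LenientMemoryMonitor extends Thread {

  /** Hypothetical hook reporting a task's current process-tree memory in bytes. */
  interface MemoryProbe {
    long vmemBytes(String taskId);
  }

  private static final int GRACE_INTERVALS = 3;  // spikes shorter than this are forgiven
  private static final long POLL_MS = 5000L;     // polling period

  private final MemoryProbe probe;
  private final Map<String, Long> taskLimits;                    // taskId -> max bytes
  private final Map<String, Integer> strikes = new HashMap<>();  // consecutive over-limit polls

  LenientMemoryMonitor(MemoryProbe probe, Map<String, Long> taskLimits) {
    this.probe = probe;
    this.taskLimits = taskLimits;
    setDaemon(true);
  }

  @Override
  public void run() {
    while (!isInterrupted()) {
      for (Map.Entry<String, Long> task : taskLimits.entrySet()) {
        String taskId = task.getKey();
        long used = probe.vmemBytes(taskId);
        if (used > task.getValue()) {
          // Over limit: count a strike, but only kill once the spike has persisted.
          int count = strikes.merge(taskId, 1, Integer::sum);
          if (count >= GRACE_INTERVALS) {
            System.err.println("Killing " + taskId + ": " + used
                + " bytes over limit for " + count + " consecutive checks");
            // actual kill/cleanup logic would go here
          }
        } else {
          // Back under the limit: the spike was transient, so forget the strikes.
          strikes.remove(taskId);
        }
      }
      try {
        Thread.sleep(POLL_MS);
      } catch (InterruptedException ie) {
        return;  // monitor asked to stop
      }
    }
  }
}

The key design choice is distinguishing a transient fork()-time doubling, which clears within a poll or two and resets the strike count, from a genuinely runaway task, which keeps accumulating strikes until the kill threshold is reached.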
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.