Posted to common-dev@hadoop.apache.org by "Arun C Murthy (JIRA)" <ji...@apache.org> on 2008/10/01 01:35:44 UTC
[jira] Updated: (HADOOP-4018) limit memory usage in jobtracker
[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Arun C Murthy updated HADOOP-4018:
----------------------------------
Status: Open (was: Patch Available)
Cancelling patch while Amar's comments are being accommodated...
Minor nit: the variable maxSplits in JobInProgress should probably be renamed to 'maxTasks' - it caps the total number of tasks, not just splits, so the current name is misleading.
I'm not super excited about using 0 as the default value for mapred.max.tasks.per.job to mean 'no limit' - this has come up before, and I guess we need to come up with a way of specifying 'UNLIMITED' in our configuration files.
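For reference, a minimal sketch of the kind of guard being discussed, assuming it would live in JobInProgress during job initialization; the class, method, and variable names below are illustrative, not taken from the actual maxSplits patch:

    import java.io.IOException;
    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical sketch of the per-job task cap under discussion;
    // names are illustrative, not the actual patch.
    class TaskLimitGuard {
      static void checkTaskLimit(JobConf conf, int numMapTasks,
                                 int numReduceTasks) throws IOException {
        // 0 is the default being debated above and is treated as "unlimited".
        int maxTasks = conf.getInt("mapred.max.tasks.per.job", 0);
        int totalTasks = numMapTasks + numReduceTasks;
        if (maxTasks > 0 && totalTasks > maxTasks) {
          // Fail the job at init time rather than letting it exhaust
          // the JobTracker heap with task bookkeeping.
          throw new IOException("Job rejected: " + totalTasks +
              " tasks exceed the configured limit of " + maxTasks);
        }
      }
    }

Failing fast at initialization keeps the JobTracker from ever materializing the per-task data structures, which is where the heap pressure described below comes from.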
> limit memory usage in jobtracker
> --------------------------------
>
> Key: HADOOP-4018
> URL: https://issues.apache.org/jira/browse/HADOOP-4018
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch
>
>
> We have seen instances where a user submitted a job with many thousands of mappers. The JobTracker was running with a 3GB heap, but that was still not enough to prevent memory thrashing from garbage collection; effectively, the JobTracker was unable to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job via a configurable parameter. Are there other things that consume large amounts of memory in the JobTracker?
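For illustration, and assuming the mapred.max.tasks.per.job property name from the patch discussion above, a client or admin could set such a cap programmatically; the value 100000 is an arbitrary example:

    import org.apache.hadoop.mapred.JobConf;

    public class SetTaskLimit {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // 100000 is an arbitrary example cap; 0 would mean "unlimited".
        conf.setInt("mapred.max.tasks.per.job", 100000);
        System.out.println("mapred.max.tasks.per.job = " +
            conf.getInt("mapred.max.tasks.per.job", 0));
      }
    }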
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.