Posted to common-dev@hadoop.apache.org by "Hemanth Yamijala (JIRA)" <ji...@apache.org> on 2008/10/01 05:19:44 UTC

[jira] Commented: (HADOOP-4018) limit memory usage in jobtracker

    [ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635938#action_12635938 ] 

Hemanth Yamijala commented on HADOOP-4018:
------------------------------------------

We use -1 in HADOOP-3759, Long.MAX_VALUE in HADOOP-657, and 0 or null in a few places. Somehow, it seems like a value that can never legitimately occur is a good choice to mean 'UNLIMITED', 'DON'T CARE', or 'TURNED OFF'. -1 seems one such. *smile*
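The sentinel idea above, combined with the configurable per-job task limit proposed in this issue, might look roughly like the following sketch. The class name `TaskLimit` and the `UNLIMITED` constant are hypothetical illustrations, not the actual patch; the only point taken from the discussion is that -1 (a value a real limit can never take) signals "no limit".

```java
// Hypothetical sketch of a per-job task limit, where -1 means "unlimited"
// (the sentinel convention discussed above). Not the actual HADOOP-4018 patch.
public class TaskLimit {

    /** Sentinel: a task count can never be -1, so -1 means "no limit". */
    public static final int UNLIMITED = -1;

    private final int maxTasksPerJob;

    public TaskLimit(int maxTasksPerJob) {
        this.maxTasksPerJob = maxTasksPerJob;
    }

    /** Returns true if a job with the given number of tasks may be admitted. */
    public boolean allows(int numTasks) {
        return maxTasksPerJob == UNLIMITED || numTasks <= maxTasksPerJob;
    }
}
```

With this convention, an unconfigured cluster would construct `new TaskLimit(-1)` and admit every job, while an operator could set a cap (say 500) to reject oversized jobs before they bloat the JobTracker heap.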



> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch
>
>
> We have seen instances where a user submitted a job with many thousands of mappers. The JobTracker was running with a 3GB heap, but even that was not enough to prevent memory thrashing from garbage collection; effectively, the JobTracker was unable to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. This could be a configurable parameter. Are there other things that eat huge globs of memory in the JobTracker?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.