Posted to mapreduce-dev@hadoop.apache.org by "Ahmed Radwan (Created) (JIRA)" <ji...@apache.org> on 2011/11/04 00:43:34 UTC
[jira] [Created] (MAPREDUCE-3343) TaskTracker Out of Memory because of distributed cache
TaskTracker Out of Memory because of distributed cache
------------------------------------------------------
Key: MAPREDUCE-3343
URL: https://issues.apache.org/jira/browse/MAPREDUCE-3343
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: mrv1
Affects Versions: 0.20.205.0
Reporter: Ahmed Radwan
This out-of-memory error occurs when a large number of jobs (using the distributed cache) run on a TaskTracker.
The root cause appears to be the distributedCacheManager (an instance of TrackerDistributedCacheManager in TaskTracker.java). It is created during TaskTracker.initialize(), and it keeps a reference to a TaskDistributedCacheManager for every submitted job via the jobArchives map, as well as references to CacheStatus objects via the cachedArchives map. I am not seeing these entries cleaned up between jobs, so after a very large number of submitted jobs this can cause out-of-memory problems. We have seen this issue in a number of cases.
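To make the leak pattern concrete, here is a minimal sketch (not the actual Hadoop classes; the class and method names are hypothetical stand-ins) of a long-lived manager whose per-job maps grow without bound unless entries are explicitly removed when a job finishes -- the cleanup step the report says is missing:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a long-lived manager (like the single
// distributedCacheManager created in TaskTracker.initialize()) that
// accumulates per-job state. Without a removal path, every submitted
// job leaves entries behind, eventually exhausting the heap.
public class CacheManagerSketch {
    // Hypothetical stand-ins for the jobArchives and cachedArchives maps.
    private final Map<String, Object> jobArchives = new HashMap<>();
    private final Map<String, Object> cachedArchives = new HashMap<>();

    public void jobSubmitted(String jobId) {
        // Per-job TaskDistributedCacheManager reference (stand-in).
        jobArchives.put(jobId, new Object());
        // Per-job CacheStatus reference (stand-in; the real map is
        // keyed by cache resource, not job ID).
        cachedArchives.put(jobId, new Object());
    }

    // The kind of cleanup the report implies is missing: drop the
    // references once the job completes so they can be collected.
    public void jobCompleted(String jobId) {
        jobArchives.remove(jobId);
        cachedArchives.remove(jobId);
    }

    public int retainedEntries() {
        return jobArchives.size() + cachedArchives.size();
    }

    public static void main(String[] args) {
        CacheManagerSketch mgr = new CacheManagerSketch();
        for (int i = 0; i < 10000; i++) {
            String jobId = "job_" + i;
            mgr.jobSubmitted(jobId);
            mgr.jobCompleted(jobId); // without this call, 20000 entries leak
        }
        System.out.println(mgr.retainedEntries()); // prints 0
    }
}
```

Without the jobCompleted() call, each iteration above adds two map entries that are never released, which is the unbounded growth described in this report.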