Posted to common-dev@hadoop.apache.org by "Nemon Lou (JIRA)" <ji...@apache.org> on 2014/05/16 13:14:06 UTC

[jira] [Created] (HADOOP-10613) Potential Resource Leaks in FileSystem.CACHE

Nemon Lou created HADOOP-10613:
----------------------------------

             Summary: Potential Resource Leaks in FileSystem.CACHE 
                 Key: HADOOP-10613
                 URL: https://issues.apache.org/jira/browse/HADOOP-10613
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs
    Affects Versions: 2.4.0
            Reporter: Nemon Lou


There is no size limit on the hash map backing FileSystem.CACHE, which can cause a potential memory leak.
If I use a new UGI object every time I invoke FileSystem.get(conf) and never invoke FileSystem's close() method, this issue arises: the cache keeps growing without bound.
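As an illustration, here is a minimal sketch of that scenario (the class name, loop count and user name are made up for the example, not taken from any real code). Each new UGI carries its own Subject, so it maps to a distinct FileSystem.Cache.Key, and every FileSystem.get(conf) call then creates and caches another FileSystem instance that is only removed when close() is called:

import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class CacheLeakSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    for (int i = 0; i < 100000; i++) {
      // A new UGI object per call: the cache key includes the UGI,
      // so each FileSystem.get(conf) creates and caches a new instance.
      UserGroupInformation ugi =
          UserGroupInformation.createRemoteUser("user" + i);
      ugi.doAs(new PrivilegedExceptionAction<Void>() {
        @Override
        public Void run() throws IOException {
          FileSystem fs = FileSystem.get(conf);
          fs.exists(new Path("/tmp"));
          // fs.close() is never called, so the cached entry stays
          // in FileSystem.CACHE for the life of the JVM.
          return null;
        }
      });
    }
  }
}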

If the cache had a size limit, or held its FileSystem instances through soft references, user code would not need to worry so much about leaking cache entries.
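For completeness, a sketch of a workaround that should already be possible with the current API, assuming FileSystem.closeAllForUGI behaves as documented (this is not part of the proposal above, just a way for callers to clean up after themselves today):

import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class CacheCleanupSketch {
  // Runs some work as the given user, then evicts every FileSystem
  // cached under that UGI so the cache cannot grow unbounded.
  static void runAndCleanUp(final Configuration conf, String user)
      throws Exception {
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser(user);
    try {
      ugi.doAs(new PrivilegedExceptionAction<Void>() {
        @Override
        public Void run() throws IOException {
          FileSystem fs = FileSystem.get(conf);
          // ... work with fs ...
          return null;
        }
      });
    } finally {
      // Closes and removes all cached FileSystem instances keyed by this UGI.
      FileSystem.closeAllForUGI(ugi);
    }
  }
}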



--
This message was sent by Atlassian JIRA
(v6.2#6252)