Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2008/05/13 23:59:55 UTC
[jira] Updated: (HADOOP-3382) Memory leak when files are not cleanly closed
[ https://issues.apache.org/jira/browse/HADOOP-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghu Angadi updated HADOOP-3382:
---------------------------------
Attachment: memleak.txt
memleak.txt attached.
Koji confirmed that this leak exists and that it can cause many other INode objects to leak as well, using a simple one-node cluster and manually interrupting the write of one of the files.
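For reference, a minimal sketch of one way to produce such an unclosed file on a test cluster; the class name, path, and write size below are arbitrary, and exiting the client process is just one way to abandon a write mid-stream:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical reproduction sketch: write to a DFS file and exit
// without closing it, so the file is left open for writing and the
// NameNode eventually releases it when the lease expires.
public class AbandonWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/leak-test"));
    out.write(new byte[1024]);
    // Exit without out.close(); the file is never cleanly closed.
    System.exit(1);
  }
}
{code}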
> Memory leak when files are not cleanly closed
> ---------------------------------------------
>
> Key: HADOOP-3382
> URL: https://issues.apache.org/jira/browse/HADOOP-3382
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Attachments: memleak.txt
>
>
> {{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open for writing but were not cleanly closed, e.g. when the client invokes {{abandonFileInProgress()}} or when the lease expires. It deletes the last block if the block has a length of zero. The block is deleted from the file INode but not from {{blocksMap}}, which leaves a reference to such a file until the NameNode is restarted. When this happens, HADOOP-3381 multiplies the amount of memory leaked.
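To illustrate the pattern described above, here is a self-contained analogue (not the actual {{FSNamesystem}} code; the class and field names are invented for illustration): a block is referenced both by its file and by a global map, and removing it from only one of the two pins the file object in memory.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BlocksMapLeakSketch {
  static class Block { final long id; Block(long id) { this.id = id; } }
  static class INodeFile { final List<Block> blocks = new ArrayList<Block>(); }

  // Analogue of the NameNode's global block-to-INode map.
  static final Map<Long, INodeFile> blocksMap = new HashMap<Long, INodeFile>();

  static void releaseCreate(INodeFile file) {
    // Drop the zero-length last block from the file, as the issue describes.
    Block last = file.blocks.remove(file.blocks.size() - 1);
    // BUG: the entry blocksMap[last.id] -> file is never removed, so the
    // file object stays reachable (and leaks) until a restart.
    // FIX: also remove the global reference:
    // blocksMap.remove(last.id);
  }

  public static void main(String[] args) {
    INodeFile file = new INodeFile();
    Block b = new Block(1L);
    file.blocks.add(b);
    blocksMap.put(b.id, file);
    releaseCreate(file);
    // The file is still pinned through blocksMap even though the
    // block is gone from the file itself:
    System.out.println("blocksMap still holds block 1: " + blocksMap.containsKey(1L));
  }
}
{code}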
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.