Posted to hdfs-dev@hadoop.apache.org by "Zhang Bingjun (JIRA)" <ji...@apache.org> on 2009/09/04 10:24:58 UTC
[jira] Created: (HDFS-596) Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
------------------------------------------------------------------------------------------------
Key: HDFS-596
URL: https://issues.apache.org/jira/browse/HDFS-596
Project: Hadoop HDFS
Issue Type: Bug
Components: contrib/fuse-dfs
Affects Versions: 0.20.1
Environment: Linux hadoop-001 2.6.28-14-server #47-Ubuntu SMP Sat Jul 25 01:18:34 UTC 2009 i686 GNU/Linux. Namenode with 1GB memory.
Reporter: Zhang Bingjun
Priority: Critical
Fix For: 0.20.1
This bug affects fuse-dfs severely. In my test, about 1GB of memory was exhausted and the fuse-dfs mount directory was disconnected after writing 14000 files.
The bug can be fixed very easily. In function hdfsFreeFileInfo() in file hdfs.c (under c++/libhdfs/), change the code block:
    //Free the mName
    int i;
    for (i=0; i < numEntries; ++i) {
        if (hdfsFileInfo[i].mName) {
            free(hdfsFileInfo[i].mName);
        }
    }
into:
    // free mName, mOwner and mGroup
    int i;
    for (i=0; i < numEntries; ++i) {
        if (hdfsFileInfo[i].mName) {
            free(hdfsFileInfo[i].mName);
        }
        if (hdfsFileInfo[i].mOwner) {
            free(hdfsFileInfo[i].mOwner);
        }
        if (hdfsFileInfo[i].mGroup) {
            free(hdfsFileInfo[i].mGroup);
        }
    }
I am new to JIRA and haven't figured out how to generate a .patch file yet. Could anyone help me do that so that others can commit the change into the code base? Thanks!
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.