Posted to hdfs-dev@hadoop.apache.org by "Suresh Srinivas (Created) (JIRA)" <ji...@apache.org> on 2012/02/12 03:50:59 UTC
[jira] [Created] (HDFS-2938) Recursive delete of a large directory makes namenode unresponsive
Recursive delete of a large directory makes namenode unresponsive
-----------------------------------------------------------------
Key: HDFS-2938
URL: https://issues.apache.org/jira/browse/HDFS-2938
Project: Hadoop HDFS
Issue Type: Bug
Components: name-node
Affects Versions: 0.22.0, 0.24.0, 0.23.1
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
When deleting a large directory containing millions of files, the namenode holds the FSNamesystem lock for the entire operation, making it unresponsive to other requests. HDFS-173 addressed this scenario by deleting blocks in smaller chunks, releasing the lock between chunks. With the new read/write lock changes, the mechanism from HDFS-173 has been lost and needs to be restored. A new unit test (or an update to an existing one) is also needed to catch future regressions in this functionality.
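The chunked-deletion idea can be sketched as follows. This is a hypothetical illustration, not the actual FSNamesystem code: the class name, the chunk-size constant, and the `lockAcquisitions` counter are all made up for this example. The point is that the write lock is released and reacquired between chunks, so reader threads can make progress while a huge delete drains.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of incremental block deletion under a read/write lock.
// Not the real HDFS code; names and the chunk size are assumptions.
public class ChunkedDelete {
    static final int BLOCK_DELETION_INCREMENT = 1000; // assumed chunk size

    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    int lockAcquisitions = 0; // counts write-lock acquisitions, for illustration

    // Removes all entries from 'blocks' in chunks, reacquiring the
    // write lock for each chunk instead of holding it throughout.
    int deleteInChunks(List<String> blocks) {
        int deleted = 0;
        while (!blocks.isEmpty()) {
            lock.writeLock().lock();
            lockAcquisitions++;
            try {
                int n = Math.min(BLOCK_DELETION_INCREMENT, blocks.size());
                // Remove one chunk from the tail to avoid O(n) shifts.
                for (int i = 0; i < n; i++) {
                    blocks.remove(blocks.size() - 1);
                    deleted++;
                }
            } finally {
                // Releasing here lets waiting readers run between chunks.
                lock.writeLock().unlock();
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        List<String> blocks = new ArrayList<>();
        for (int i = 0; i < 2500; i++) blocks.add("blk_" + i);
        ChunkedDelete cd = new ChunkedDelete();
        int deleted = cd.deleteInChunks(blocks);
        System.out.println(deleted + " blocks deleted in "
                + cd.lockAcquisitions + " lock acquisitions");
    }
}
```

With 2500 blocks and a chunk size of 1000, the lock is taken three times rather than held once for the whole delete; in the real namenode the trade-off is the same, just at the scale of millions of blocks.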
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira