Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2009/05/18 07:13:45 UTC

[jira] Commented: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

    [ https://issues.apache.org/jira/browse/HADOOP-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710275#action_12710275 ] 

dhruba borthakur commented on HADOOP-5825:
------------------------------------------

Is it possible that FSNamesystem.removePathAndBlocks() is the major bottleneck? If so, maybe we can re-arrange the code to move the "freeing up resources" work outside the FSNamesystem lock.
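One way to sketch that idea: hold the lock only long enough to unlink the subtree and collect its blocks, then free the collected blocks in small batches, re-acquiring and releasing the lock between batches so other client requests can get in. This is only an illustrative sketch; the class and field names below (NamesystemSketch, fsLock, blocksMap, BLOCK_DELETION_INCREMENT) are hypothetical stand-ins, not the actual NameNode code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of incremental block deletion. A stand-in lock and
// blocks map model the FSNamesystem lock and the block-to-node map.
public class NamesystemSketch {
    // How many blocks to free per lock acquisition (illustrative value).
    static final int BLOCK_DELETION_INCREMENT = 1000;

    final ReentrantLock fsLock = new ReentrantLock();
    final List<String> blocksMap = new ArrayList<>();

    void delete(List<String> blocksOfDeletedSubtree) {
        List<String> collected;
        fsLock.lock();
        try {
            // 1. Under the lock: unlink the subtree from the namespace and
            //    collect its blocks (cheap compared to freeing them all).
            collected = new ArrayList<>(blocksOfDeletedSubtree);
        } finally {
            fsLock.unlock();
        }
        // 2. Free the collected blocks in batches, releasing the lock
        //    between batches instead of holding it for the whole deletion.
        for (int i = 0; i < collected.size(); i += BLOCK_DELETION_INCREMENT) {
            fsLock.lock();
            try {
                int end = Math.min(i + BLOCK_DELETION_INCREMENT,
                                   collected.size());
                blocksMap.removeAll(collected.subList(i, end));
            } finally {
                fsLock.unlock();
            }
            // Other NameNode operations can acquire fsLock at this point.
        }
    }
}
```

The trade-off is that the namespace entry disappears atomically, while the block cleanup becomes eventually consistent, which is usually acceptable for deletes.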

> Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5825
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5825
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>
> Delete a directory with millions of files. This can take several minutes (12 minutes observed for 9 million files). While the operation is in progress, the FSNamesystem lock is held and requests from other clients are not handled until the deletion completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.