Posted to common-dev@hadoop.apache.org by "Suresh Srinivas (JIRA)" <ji...@apache.org> on 2009/05/14 02:18:45 UTC
[jira] Created: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
-----------------------------------------------------------------------------------------------------------------------------------
Key: HADOOP-5825
URL: https://issues.apache.org/jira/browse/HADOOP-5825
Project: Hadoop Core
Issue Type: Bug
Reporter: Suresh Srinivas
Delete a directory with millions of files. This can take several minutes (12 minutes observed for 9 million files). While the operation is in progress, the FSNamesystem lock is held and requests from other clients are not handled until the deletion completes.
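The behavior described above can be illustrated with a minimal sketch (a toy model, not Hadoop's actual code; `ToyNamesystem` and its methods are invented for illustration): a single global lock guards the whole namespace, so a recursive delete of N entries blocks every other request for the full O(N) traversal.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

// Toy model (not Hadoop code) of the reported behavior: one global lock
// guards the whole namespace, so a recursive delete of N entries blocks
// every other request for the duration of the full O(N) traversal.
class ToyNamesystem {
    private final ReentrantLock fsLock = new ReentrantLock(); // stand-in for the FSNamesystem lock
    private final Map<String, List<String>> children = new HashMap<>();

    void mkdir(String path) {
        fsLock.lock();
        try {
            children.putIfAbsent(path, new ArrayList<>());
            int i = path.lastIndexOf('/');
            if (i > 0) { // link the new entry into its parent directory
                children.computeIfAbsent(path.substring(0, i), k -> new ArrayList<>())
                        .add(path.substring(i + 1));
            }
        } finally {
            fsLock.unlock();
        }
    }

    void delete(String path) {
        fsLock.lock(); // held for the ENTIRE recursive delete
        try {
            deleteRecursive(path);
        } finally {
            fsLock.unlock();
        }
    }

    private void deleteRecursive(String path) {
        for (String child : children.getOrDefault(path, Collections.emptyList())) {
            deleteRecursive(path + "/" + child);
        }
        children.remove(path);
    }

    boolean exists(String path) {
        fsLock.lock(); // any reader waits behind an in-flight delete
        try {
            return children.containsKey(path);
        } finally {
            fsLock.unlock();
        }
    }
}
```

With millions of entries under the deleted directory, every `exists`-style read queues behind `delete` until the traversal finishes, which matches the multi-minute stalls reported here.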
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur updated HADOOP-5825:
-------------------------------------
Component/s: dfs
[jira] Assigned: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
Posted by "Suresh Srinivas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Suresh Srinivas reassigned HADOOP-5825:
---------------------------------------
Assignee: Suresh Srinivas
[jira] Commented: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710275#action_12710275 ]
dhruba borthakur commented on HADOOP-5825:
------------------------------------------
Is it possible that FSNamesystem.removePathAndBlocks() is the major bottleneck? If so, we could perhaps rearrange the code to keep the "freeing up resources" part of the code outside the FSNamesystem lock.
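A hedged sketch of that idea (illustrative names only; `DeferredBlockDelete`, `addFile`, and `invalidateBlock` are not Hadoop APIs): do the cheap unlink under the lock and collect the affected block IDs, then perform the expensive "freeing up resources" step after releasing the lock, so other requests are not stalled for the whole deletion.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the suggestion above (invented names, not Hadoop APIs):
// unlink under the lock, free the blocks outside it.
class DeferredBlockDelete {
    private final ReentrantLock fsLock = new ReentrantLock(); // stand-in for the FSNamesystem lock
    private final Map<String, List<Long>> blocksByPath = new HashMap<>();
    private final Set<Long> invalidated = new HashSet<>();

    void addFile(String path, List<Long> blockIds) {
        fsLock.lock();
        try {
            blocksByPath.put(path, new ArrayList<>(blockIds));
        } finally {
            fsLock.unlock();
        }
    }

    void delete(String path) {
        List<Long> collected;
        fsLock.lock();
        try {
            // Cheap part under the lock: detach the entry and gather its block IDs.
            collected = blocksByPath.remove(path);
        } finally {
            fsLock.unlock();
        }
        if (collected == null) return;
        // Expensive part outside the lock: other RPCs can proceed concurrently.
        for (long blockId : collected) {
            invalidateBlock(blockId);
        }
    }

    private void invalidateBlock(long blockId) {
        synchronized (invalidated) { // hypothetical per-block cleanup work
            invalidated.add(blockId);
        }
    }

    boolean exists(String path) {
        fsLock.lock();
        try {
            return blocksByPath.containsKey(path);
        } finally {
            fsLock.unlock();
        }
    }

    boolean isInvalidated(long blockId) {
        synchronized (invalidated) {
            return invalidated.contains(blockId);
        }
    }
}
```

The trade-off is that, between the unlink and the block invalidation, the path is already gone from the namespace while its blocks are still being released in the background.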