Posted to common-issues@hadoop.apache.org by "Vinod K V (JIRA)" <ji...@apache.org> on 2010/03/12 11:05:27 UTC

[jira] Commented: (HADOOP-6631) FileUtil.fullyDelete() should continue to delete other files despite failure at any level.

    [ https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12844442#action_12844442 ] 

Vinod K V commented on HADOOP-6631:
-----------------------------------

I think this is a critical bug in a util method that is used extensively throughout MAPREDUCE.

The important implication of this bug is unreclaimed disk space: a single undeletable file/dir (or a few of them) can prevent everything else under the same tree from being deleted.
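
For illustration, here is a rough sketch (not the actual patch; the class name and structure are my own) of the "continue instead of return" behaviour Ravi suggested, i.e., delete as much as possible and report failure only at the end:

{code}
import java.io.File;

// Illustrative sketch only, not the committed fix: a fullyDelete() that
// keeps deleting siblings after a failure instead of returning early.
public class DeleteSketch {
  public static boolean fullyDelete(File dir) {
    boolean deletionSucceeded = true;
    File[] contents = dir.listFiles(); // null if dir is not a directory
    if (contents != null) {
      for (File f : contents) {
        if (f.isFile()) {
          if (!f.delete()) {
            deletionSucceeded = false; // record the failure, keep going
          }
        } else if (!fullyDelete(f)) {
          deletionSucceeded = false;   // subtree had failures, keep going
        }
      }
    }
    // Deleting dir itself fails if any children could not be removed.
    return dir.delete() && deletionSucceeded;
  }
}
{code}

With this structure a single permission failure no longer blocks reclaiming the rest of the tree, which matches the 'rm -rf' semantics asked for below.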

> FileUtil.fullyDelete() should continue to delete other files despite failure at any level.
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6631
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, util
>            Reporter: Vinod K V
>             Fix For: 0.22.0
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently, FileUtil.fullyDelete(myDir) stops deleting the remaining files/directories as soon as it is unable to delete a file/dir anywhere under myDir (say, because it lacks permission to delete that file/dir). This is because we return from the method if the recursive call "if(!fullyDelete()) {return false;}" fails at any level of recursion.
> Shouldn't it continue deleting the other files/dirs in the for loop instead of returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to 'rm -rf').

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.