Posted to common-issues@hadoop.apache.org by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org> on 2012/09/26 14:43:10 UTC

[jira] [Created] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Ivan A. Veselovsky created HADOOP-8849:
------------------------------------------

             Summary: FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them
                 Key: HADOOP-8849
                 URL: https://issues.apache.org/jira/browse/HADOOP-8849
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Ivan A. Veselovsky
            Priority: Minor


Two improvements are suggested for the implementation of the methods org.apache.hadoop.fs.FileUtil.fullyDelete(File) and org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):

1) We should grant +rwx permissions to the target directories before trying to delete them.
The mentioned methods fail to delete directories that lack read or execute permissions.
The actual problem appears when an hdfs-related test times out (with a short timeout like tens of seconds) and the forked test process is killed: some directories that are not readable and/or executable are left on disk. These directories cannot be deleted with FileUtil#fullyDelete(), so the next tests cannot run properly and many of them fail. It is therefore recommended to grant read, write, and execute permissions to the directories whose content is to be deleted.

2) We shouldn't rely upon the File#delete() return value; use File#exists() instead.
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but this is not reliable, because File#delete() returns true only if the file was deleted as a result of that particular invocation. E.g. in the following code
if (f.exists()) { // 1
  return f.delete(); // 2
}
if the file f is deleted by another thread or process between points "1" and "2", this fragment returns "false", even though f no longer exists when the method returns.
So it is better to write
if (f.exists()) {
  f.delete();          // the return value is deliberately ignored
  return !f.exists();  // success means the file is gone, whoever removed it
}
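
A minimal sketch of how the two suggestions could fit together. The class and helper names (FullyDeleteSketch, grantPermissions, deleteImpl) are illustrative guesses rather than the actual patch contents; java.io.File#setReadable/#setWritable/#setExecutable are standard since Java 6.

import java.io.File;

public class FullyDeleteSketch {

  // Best-effort grant of +rwx so the directory can be listed and its
  // children removed; the individual results are ignored because the
  // deletion below may still succeed.
  private static void grantPermissions(File dir) {
    dir.setReadable(true);
    dir.setWritable(true);
    dir.setExecutable(true);
  }

  // Judge success by the file's absence rather than by File#delete()'s
  // return value, so a concurrent deletion still counts as success.
  private static boolean deleteImpl(File f) {
    if (!f.exists()) {
      return true; // already gone, possibly removed by another process
    }
    f.delete();
    return !f.exists();
  }

  // Recursively deletes dir together with all of its contents.
  public static boolean fullyDelete(File dir) {
    if (dir.isDirectory()) {
      grantPermissions(dir);
      File[] children = dir.listFiles();
      if (children != null) { // null when listing fails despite the grant
        for (File child : children) {
          if (!fullyDelete(child)) {
            return false;
          }
        }
      }
    }
    return deleteImpl(dir);
  }
}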

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470368#comment-13470368 ] 

Hadoop QA commented on HADOOP-8849:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12547980/HADOOP-8849-vs-trunk-2.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:red}-1 findbugs{color}.  The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-common-project/hadoop-common:

                  org.apache.hadoop.ha.TestZKFailoverController

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//console

This message is automatically generated.
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment:     (was: HADOOP-8849-vs-trunk.patch)
    


[jira] [Assigned] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky reassigned HADOOP-8849:
------------------------------------------

    Assignee: Ivan A. Veselovsky
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Status: Patch Available  (was: Open)
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Description: 
Two improvements are suggested for the implementation of the methods org.apache.hadoop.fs.FileUtil.fullyDelete(File) and org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):

1) We should grant +rwx permissions to the target directories before trying to delete them.
The mentioned methods fail to delete directories that lack read or execute permissions.
The actual problem appears when an hdfs-related test times out (with a short timeout like tens of seconds) and the forked test process is killed: some directories that are not readable and/or executable are left on disk. These directories cannot be deleted with FileUtil#fullyDelete(), so the next tests cannot run properly and many of them fail. It is therefore recommended to grant read, write, and execute permissions to the directories whose content is to be deleted.

2) Generic reliability improvement: we shouldn't rely upon the File#delete() return value; use File#exists() instead.
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but this is not reliable, because File#delete() returns true only if the file was deleted as a result of that particular invocation. E.g. in the following code
if (f.exists()) { // 1
  return f.delete(); // 2
}
if the file f is deleted by another thread or process between points "1" and "2", this fragment returns "false", even though f no longer exists when the method returns.
So it is better to write
if (f.exists()) {
  f.delete();
  return !f.exists();
}

  was:
Two improvements are suggested for the implementation of the methods org.apache.hadoop.fs.FileUtil.fullyDelete(File) and org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):

1) We should grant +rwx permissions to the target directories before trying to delete them.
The mentioned methods fail to delete directories that lack read or execute permissions.
The actual problem appears when an hdfs-related test times out (with a short timeout like tens of seconds) and the forked test process is killed: some directories that are not readable and/or executable are left on disk. These directories cannot be deleted with FileUtil#fullyDelete(), so the next tests cannot run properly and many of them fail. It is therefore recommended to grant read, write, and execute permissions to the directories whose content is to be deleted.

2) We shouldn't rely upon the File#delete() return value; use File#exists() instead.
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but this is not reliable, because File#delete() returns true only if the file was deleted as a result of that particular invocation. E.g. in the following code
if (f.exists()) { // 1
  return f.delete(); // 2
}
if the file f is deleted by another thread or process between points "1" and "2", this fragment returns "false", even though f no longer exists when the method returns.
So it is better to write
if (f.exists()) {
  f.delete();
  return !f.exists();
}

    


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471428#comment-13471428 ] 

Ivan A. Veselovsky commented on HADOOP-8849:
--------------------------------------------

Note: the automatic checking system has complained several times about the failed test org.apache.hadoop.ha.TestZKFailoverController.
But the proposed change does not affect that test's functionality in any way -- the methods FileUtil#fullyDelete() and FileUtil#fullyDeleteContents() are not called during its execution, as can be verified with a debugger. I think org.apache.hadoop.ha.TestZKFailoverController simply fails sporadically.
                


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464078#comment-13464078 ] 

Hadoop QA commented on HADOOP-8849:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546696/HADOOP-8849-vs-trunk.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:red}-1 findbugs{color}.  The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-common-project/hadoop-common:

                  org.apache.hadoop.ha.TestZKFailoverController

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//console

This message is automatically generated.
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Status: Patch Available  (was: Open)
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Status: Open  (was: Patch Available)
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment:     (was: HADOOP-8849-vs-trunk-2.patch)
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment: HADOOP-8849-vs-trunk.patch

Attaching the suggested patch.
The test TestFileUtil has been modified accordingly.
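
The attached patch is not reproduced in this archive. Purely as an illustration of the kind of test such a change needs, the sketch below strips read/execute permission from a directory and asserts that fullyDelete() still removes it; the class and file names are invented, and a POSIX-like platform is assumed (the permission setters may have no effect on Windows).

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;
import org.junit.Assert;
import org.junit.Test;

public class TestFullyDeleteSketch {

  @Test
  public void testDeleteDirWithoutReadExecPermissions() throws IOException {
    // Build a small tree: top/inner/data.
    File top = new File(System.getProperty("java.io.tmpdir"), "fullyDelete-sketch");
    File inner = new File(top, "inner");
    Assert.assertTrue(inner.mkdirs());
    Assert.assertTrue(new File(inner, "data").createNewFile());
    // Simulate the state left behind by a killed test process.
    Assert.assertTrue(inner.setReadable(false));
    Assert.assertTrue(inner.setExecutable(false));
    try {
      // With the +rwx grant in place, the whole tree must go away.
      Assert.assertTrue(FileUtil.fullyDelete(top));
      Assert.assertFalse(top.exists());
    } finally {
      // Restore permissions if the delete failed, so the tree can be
      // removed manually afterwards.
      inner.setReadable(true);
      inner.setExecutable(true);
    }
  }
}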
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Status: Patch Available  (was: Open)
    


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470457#comment-13470457 ] 

Hadoop QA commented on HADOOP-8849:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12547999/HADOOP-8849-vs-trunk-3.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-common-project/hadoop-common:

                  org.apache.hadoop.ha.TestZKFailoverController

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1564//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1564//console

This message is automatically generated.
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Status: Open  (was: Patch Available)
    


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470480#comment-13470480 ] 

Hadoop QA commented on HADOOP-8849:
-----------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12548002/HADOOP-8849-vs-trunk-4.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-common-project/hadoop-common.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1565//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1565//console

This message is automatically generated.
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment: HADOOP-8849-vs-trunk-4.patch

Corrected the formatting of the code (2-space indentation, no tabs).
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment:     (was: HADOOP-8849-vs-trunk-3.patch)
    


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment: HADOOP-8849-vs-trunk-2.patch

Version #2 of the patch, in which the findbugs warning is worked around.
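
The warning itself is not quoted in the thread, so the following is an assumption on my part rather than the actual change: a likely candidate is findbugs' RV_RETURN_VALUE_IGNORED_BAD_PRACTICE firing on the bare f.delete() call, which is commonly quieted by consuming the return value.

// Hypothetical sketch only, not taken from the HADOOP-8849 patch.
private static boolean deleteImpl(java.io.File f) {
  boolean deleted = f.delete();
  // Consuming the result keeps the ignored-return-value detector quiet,
  // while success is still judged by the file's absence.
  return deleted || !f.exists();
}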
                


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

Posted by "Ivan A. Veselovsky (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-8849:
---------------------------------------

    Attachment: HADOOP-8849-vs-trunk-3.patch

Fixed another findbugs warning that appeared in patch #2.
                
