Posted to common-dev@hadoop.apache.org by "Tom White (JIRA)" <ji...@apache.org> on 2007/01/18 10:58:29 UTC

[jira] Created: (HADOOP-901) Make S3FileSystem do recursive renames

Make S3FileSystem do recursive renames
--------------------------------------

                 Key: HADOOP-901
                 URL: https://issues.apache.org/jira/browse/HADOOP-901
             Project: Hadoop
          Issue Type: Bug
          Components: fs
    Affects Versions: 0.10.1
            Reporter: Tom White


From Mike Smith:

I went through the S3FileSystem.java code and fixed the renameRaw() method.
It now iterates through the folders recursively and renames them. Also, when
the destination folder already exists, it moves the src folder under the
dst folder.

Here is the code that should be replaced in S3FileSystem.java: the existing
renameRaw() method should be replaced by the following two methods:


@Override
public boolean renameRaw(Path src, Path dst) throws IOException {
  Path absoluteDst = makeAbsolute(dst);
  Path absoluteSrc = makeAbsolute(src);

  INode inode = store.getINode(absoluteDst);
  if (inode != null && inode.isDirectory()) {
    // The dst folder already exists: move the src folder under it.
    Path newDst = new Path(absoluteDst.toString() + "/" + absoluteSrc.getName());
    return renameRaw(src, newDst, src);
  } else {
    // The dst folder does not exist: it will be created by the rename.
    return renameRaw(src, dst, src);
  }
}

// Recursively goes through all the subfolders and renames them.
public boolean renameRaw(Path src, Path dst, Path orgSrc) throws IOException {
  Path absoluteSrc = makeAbsolute(src);
  // Derive the new destination by swapping the original source prefix
  // for the destination prefix.
  Path newDst = new Path(src.toString().replaceFirst(orgSrc.toString(), dst.toString()));
  Path absoluteDst = makeAbsolute(newDst);
  LOG.info(absoluteSrc.toString());

  INode inode = store.getINode(absoluteSrc);
  if (inode == null) {
    return false;
  }
  // Store the INode under its new path; for directories, recurse into the
  // children as well before deleting the old entry.
  store.storeINode(absoluteDst, inode);
  if (!inode.isFile()) {
    Path[] contents = listPathsRaw(absoluteSrc);
    if (contents == null) {
      return false;
    }
    for (Path p : contents) {
      if (!renameRaw(p, dst, orgSrc)) {
        return false;
      }
    }
  }
  store.deleteINode(absoluteSrc);
  return true;
}

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465882 ] 

Tom White commented on HADOOP-901:
----------------------------------

I agree the path manipulation looks wrong. I'm currently writing a set of test cases - there's a surprising number of them.
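
A minimal JUnit 3-style sketch of a few of the rename scenarios such a suite
has to cover (illustrative only, not the test code from the eventual patch;
it assumes a FileSystem handle initialised elsewhere and uses only the
generic exists/mkdirs/create/rename calls):

import junit.framework.TestCase;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: a sample of the rename cases a test suite must cover.
public abstract class RenameCasesSketch extends TestCase {

  protected FileSystem fs; // assumed to be initialised in a subclass setUp()

  public void testRenameMissingSourceFails() throws Exception {
    // Renaming a path that does not exist should return false.
    assertFalse(fs.rename(new Path("/no/such/path"), new Path("/dst")));
  }

  public void testRenameFile() throws Exception {
    // A plain file rename: the old name disappears, the new one appears.
    Path src = new Path("/dir/file");
    fs.create(src).close();
    assertTrue(fs.rename(src, new Path("/dir/renamed")));
    assertFalse(fs.exists(src));
    assertTrue(fs.exists(new Path("/dir/renamed")));
  }

  public void testRenameDirectoryIntoExistingDirectory() throws Exception {
    // Renaming /a onto an existing directory /b should move /a under /b,
    // carrying the whole subtree with it.
    assertTrue(fs.mkdirs(new Path("/a/sub")));
    assertTrue(fs.mkdirs(new Path("/b")));
    assertTrue(fs.rename(new Path("/a"), new Path("/b")));
    assertTrue(fs.exists(new Path("/b/a/sub")));
  }
}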


[jira] Updated: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-901:
-----------------------------

    Attachment: hadoop-901.patch


[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Mike Smith (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466549 ] 

Mike Smith commented on HADOOP-901:
-----------------------------------

Thanks Tom. I just tested the patch and it works fine.


Re: [jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by Nigel Daley <nd...@yahoo-inc.com>.
Sorry, false negative.  The patch process is catching up on today's  
patches and this one has already been committed.

On Jan 22, 2007, at 9:38 PM, Hadoop QA (JIRA) wrote:

>
>     [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466647 ]
>
> Hadoop QA commented on HADOOP-901:
> ----------------------------------
>
> -1, because the patch command could not apply the latest attachment
> (http://issues.apache.org/jira/secure/attachment/12349348/hadoop-901.patch)
> as a patch to trunk revision r498829. Please note that this message is
> automatically generated and may represent a problem with the automation
> system and not the patch.
>


[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466647 ] 

Hadoop QA commented on HADOOP-901:
----------------------------------

-1, because the patch command could not apply the latest attachment (http://issues.apache.org/jira/secure/attachment/12349348/hadoop-901.patch) as a patch to trunk revision r498829. Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.



[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "James P. White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465859 ] 

James P. White commented on HADOOP-901:
---------------------------------------

I too am uncertain whether that code is correct, but surely the "/" should be replaced with File.separator.

But actually:

Path newDst = new Path(absoluteDst.toString() + File.separator + absoluteSrc.getName());

should probably be:

Path newDst = new Path(absoluteDst, absoluteSrc.getName());
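
A small standalone illustration of why the constructor form is preferable
(not from the thread's code; the paths are made up): String.replaceFirst,
used elsewhere in the posted renameRaw(), treats its first argument as a
regular expression, so a path containing a regex metacharacter is not
matched literally, while new Path(parent, child) has no separator or
escaping pitfalls.

import org.apache.hadoop.fs.Path;

public class PathComposition {
  public static void main(String[] args) {
    Path parent = new Path("/user/data");
    Path child = new Path("/tmp/file+1");

    // Constructor-based composition: no separator handling needed.
    System.out.println(new Path(parent, child.getName())); // /user/data/file+1

    // String-based composition: "+" is a regex quantifier, so the literal
    // prefix "/tmp/file+1" is never matched and nothing is replaced.
    System.out.println(child.toString().replaceFirst("/tmp/file+1",
        parent.toString()));                               // /tmp/file+1
  }
}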




[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465852 ] 

Doug Cutting commented on HADOOP-901:
-------------------------------------

Can you please attach a patch file for this?  Thanks!

http://wiki.apache.org/lucene-hadoop/HowToContribute

Tom, does this look right to you?  I worry a bit about the manipulation of paths as strings, but haven't yet looked deeply at what this is doing.


[jira] Updated: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-901:
-----------------------------

    Status: Patch Available  (was: Open)

Thanks Mike!


[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466161 ] 

Tom White commented on HADOOP-901:
----------------------------------

There are a couple of things that I would change:

1. There are a number of additional checks that need to be performed before doing the rename, for example checking that src exists.
2. It would be simpler and more efficient to use S3's ability to match all paths with a given prefix, rather than using listPathsRaw recursively. This entails adding a listDeepSubPaths method to FileSystemStore.

I'll create a patch to do this.
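
A rough sketch of what point 2 might look like inside S3FileSystem
(illustrative only, not the attached patch; it assumes a
FileSystemStore.listDeepSubPaths(Path) returning a java.util.Set of all
stored paths under the given prefix, plus the existing store, makeAbsolute
and INode members used in Mike's code above):

private boolean renameRecursive(Path src, Path dst) throws IOException {
  Path absoluteSrc = makeAbsolute(src);
  Path absoluteDst = makeAbsolute(dst);

  INode srcInode = store.getINode(absoluteSrc);
  if (srcInode == null) {
    return false;                          // the source must exist
  }

  // One S3 listing for the whole subtree instead of one listPathsRaw()
  // call per directory level.
  Set<Path> subPaths = store.listDeepSubPaths(absoluteSrc);

  // Store every INode under its new key first, then delete the old keys,
  // so a failure part-way through never removes the only copy of an INode.
  store.storeINode(absoluteDst, srcInode);
  for (Path oldPath : subPaths) {
    String suffix = oldPath.toString().substring(absoluteSrc.toString().length());
    store.storeINode(new Path(absoluteDst.toString() + suffix),
        store.getINode(oldPath));
  }
  for (Path oldPath : subPaths) {
    store.deleteINode(oldPath);
  }
  store.deleteINode(absoluteSrc);
  return true;
}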


[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466364 ] 

Tom White commented on HADOOP-901:
----------------------------------

I've attached a patch which does some additional checks and uses a new method listDeepSubPaths to retrieve all subpaths in one S3 operation. It includes unit tests.

Note that, unlike Mike's code, a rename will fail (return false) if the parent directory of the destination does not exist, which I believe is consistent with HDFS. Mike - could you check that this patch works for your use case please?
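
For illustration, the destination-parent check described above could be
expressed roughly as follows (a sketch under the same assumptions as Mike's
code, i.e. the store and makeAbsolute members of S3FileSystem; it is not the
code in the attached patch):

// Return true only if the parent of dst exists and is a directory, so a
// rename into a non-existent parent can be rejected, matching HDFS.
private boolean dstParentExists(Path dst) throws IOException {
  Path parent = makeAbsolute(dst).getParent();
  if (parent == null) {
    return true;                 // dst is the root, which always exists
  }
  INode parentInode = store.getINode(parent);
  return parentInode != null && parentInode.isDirectory();
}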


[jira] Commented: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Mike Smith (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12466197 ] 

Mike Smith commented on HADOOP-901:
-----------------------------------

I can commit this patch and it works fine. I have tested several cases to make sure it works. But if you are going to commit a cleaned-up version, then I'll wait for that one.


[jira] Updated: (HADOOP-901) Make S3FileSystem do recursive renames

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-901:
--------------------------------

       Resolution: Fixed
    Fix Version/s: 0.11.0
           Status: Resolved  (was: Patch Available)

I just committed this.  Thanks, Tom!
