Posted to common-dev@hadoop.apache.org by "Mike Smith (JIRA)" <ji...@apache.org> on 2007/03/04 23:30:50 UTC

[jira] Commented: (HADOOP-1061) S3 listSubPaths bug

    [ https://issues.apache.org/jira/browse/HADOOP-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12477874 ] 

Mike Smith commented on HADOOP-1061:
------------------------------------

As a temporary solution, I replaced the listSubPaths method of the Jets3tFileSystemStore class with this new method, which simply checks whether there is any tail after the delimiter and also does not pass zero to s3Service.listObjects:

public Set<Path> listSubPaths(Path path) throws IOException {
    try {
      String prefix = pathToKey(path);
      if (!prefix.endsWith(PATH_DELIMITER)) {
        prefix += PATH_DELIMITER;
      }
      // No maxListingLength argument here: jets3t then uses S3's default
      // max-keys of 1000 instead of the 0 that caused inconsistent listings.
      S3Object[] objects = s3Service.listObjects(bucket, prefix, PATH_DELIMITER);
      Set<Path> prefixes = new TreeSet<Path>();
      for (int i = 0; i < objects.length; i++) {
        // Keep only direct children: skip any key that still contains a
        // delimiter after the prefix is stripped off. substring() is used
        // instead of replaceAll(), which would treat the prefix as a regex
        // and replace every occurrence, not just the leading one.
        String tail = objects[i].getKey().substring(prefix.length());
        if (tail.indexOf(PATH_DELIMITER) < 0) {
          prefixes.add(keyToPath(objects[i].getKey()));
        }
      }
      prefixes.remove(path);
      return prefixes;
    } catch (S3ServiceException e) {
      if (e.getCause() instanceof IOException) {
        throw (IOException) e.getCause();
      }
      throw new S3Exception(e);
    }
  }
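
For illustration, here is a minimal standalone sketch of the depth-1 filter above; the keys and prefix are made-up examples, not real bucket contents:

import java.util.Set;
import java.util.TreeSet;

public class DepthOneFilterDemo {
  private static final String PATH_DELIMITER = "/";

  public static void main(String[] args) {
    // Hypothetical keys returned for prefix "user/root/" when the
    // delimiter is not honored on the server side.
    String[] keys = { "user/root/a", "user/root/b", "user/root/folder/nested" };
    String prefix = "user/root/";

    Set<String> children = new TreeSet<String>();
    for (String key : keys) {
      // Same check as in listSubPaths: keep only direct children.
      if (key.substring(prefix.length()).indexOf(PATH_DELIMITER) < 0) {
        children.add(key);
      }
    }
    System.out.println(children); // prints [user/root/a, user/root/b]
  }
}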

> S3 listSubPaths bug
> -------------------
>
>                 Key: HADOOP-1061
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1061
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.11.2, 0.12.0
>            Reporter: Mike Smith
>            Priority: Critical
>
> I had a problem with the -ls command in the s3 file system. It was returning an inconsistent number of "Found Items" across reruns, and more importantly it returns recursive results (beyond depth 1) for some folders.
> I looked into the code; the problem is caused by the jets3t library. The inconsistency problem is solved if we use:
> S3Object[] objects = s3Service.listObjects(bucket, prefix, PATH_DELIMITER);
> instead of 
> S3Object[] objects = s3Service.listObjects(bucket, prefix, PATH_DELIMITER , 0);
> in listSubPaths of the Jets3tFileSystemStore class (line 227). This change lets the REST GET request use the "max-keys" parameter's default value of 1000. It seems the S3 GET request is sensitive to this parameter.
> But the recursive problem is because the GET request doesn't apply the delimiter constraint correctly. The response contains all the keys with the given prefix, but they don't stop at the PATH_DELIMITER. You can test this simply by making a couple of folders on the Hadoop S3 filesystem and running -ls. I traced the generated GET request and it looks fine, but it is not executed correctly on the S3 server side. I still don't know why the response doesn't stop at the PATH_DELIMITER.
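> For what it's worth, a minimal repro sketch; the bucket name and credentials are placeholders, and it only uses the jets3t calls already mentioned above:
>
>   import org.jets3t.service.S3Service;
>   import org.jets3t.service.impl.rest.httpclient.RestS3Service;
>   import org.jets3t.service.model.S3Bucket;
>   import org.jets3t.service.model.S3Object;
>   import org.jets3t.service.security.AWSCredentials;
>
>   public class DelimiterRepro {
>     public static void main(String[] args) throws Exception {
>       // Placeholder credentials and bucket name.
>       S3Service s3 = new RestS3Service(
>           new AWSCredentials("ACCESS_KEY", "SECRET_KEY"));
>       S3Bucket bucket = new S3Bucket("my-hadoop-bucket");
>       // With a delimiter, keys below depth 1 should be rolled up into
>       // common prefixes and left out of this listing; in practice the
>       // response still contains them.
>       S3Object[] objects = s3.listObjects(bucket, "user/root/", "/");
>       for (S3Object o : objects) {
>         System.out.println(o.getKey());
>       }
>     }
>   }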
> Possible cause: the jets3t library already does URL encoding, so why do we need to do URL encoding in the Jets3tFileSystemStore class?!
> Example:
> The original path is /user/root/folder, and it is encoded to %2Fuser%2Froot%2Ffolder in the Jets3tFileSystemStore class. Then jets3t re-encodes this to make the REST request, so it is rewritten as %252Fuser%252Froot%252Ffolder, and the generated folder on S3 ends up as %2Fuser%2Froot%2Ffolder after decoding on the Amazon side. Wouldn't it be better to skip the encoding on the Hadoop side? This strange structure might be the reason that S3 doesn't stop at the PATH_DELIMITER.
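> To see the double encoding concretely, a quick sketch with plain java.net.URLEncoder (mirroring the two encoding layers):
>
>   import java.net.URLEncoder;
>
>   public class DoubleEncodeDemo {
>     public static void main(String[] args) throws Exception {
>       String path = "/user/root/folder";
>       // Encoding done in Jets3tFileSystemStore:
>       String once = URLEncoder.encode(path, "UTF-8");
>       // jets3t encodes the key again when building the REST request:
>       String twice = URLEncoder.encode(once, "UTF-8");
>       System.out.println(once);  // %2Fuser%2Froot%2Ffolder
>       System.out.println(twice); // %252Fuser%252Froot%252Ffolder
>     }
>   }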
