Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/02/17 13:49:48 UTC

[GitHub] [hadoop] ahmarsuhail commented on a change in pull request #3978: HADOOP-13704. Optimised getContentSummary()

ahmarsuhail commented on a change in pull request #3978:
URL: https://github.com/apache/hadoop/pull/3978#discussion_r809067257



##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/GetContentSummaryOperation.java
##########
@@ -133,34 +133,63 @@ public ContentSummary execute() throws IOException {
    * @throws IOException failure
    */
   public ContentSummary getDirSummary(Path dir) throws IOException {
+
     long totalLength = 0;
     long fileCount = 0;
     long dirCount = 1;
-    final RemoteIterator<S3AFileStatus> it
-        = callbacks.listStatusIterator(dir);
+
+    RemoteIterator<S3ALocatedFileStatus> it = callbacks.listFilesIterator(dir, true);
+
+    Set<Path> dirSet = new HashSet<>();
+    Set<Path> pathsTraversed = new HashSet<>();
 
     while (it.hasNext()) {
-      final S3AFileStatus s = it.next();
-      if (s.isDirectory()) {
-        try {
-          ContentSummary c = getDirSummary(s.getPath());
-          totalLength += c.getLength();
-          fileCount += c.getFileCount();
-          dirCount += c.getDirectoryCount();
-        } catch (FileNotFoundException ignored) {
-          // path was deleted during the scan; exclude from
-          // summary.
-        }
-      } else {
-        totalLength += s.getLen();
+      S3ALocatedFileStatus fileStatus = it.next();
+      Path filePath = fileStatus.getPath();
+
+      if (fileStatus.isDirectory() && !filePath.equals(dir)) {
+        dirSet.add(filePath);
+        buildDirectorySet(dirSet, pathsTraversed, dir, filePath.getParent());
+      } else if (!fileStatus.isDirectory()) {
         fileCount += 1;
+        totalLength += fileStatus.getLen();
+        buildDirectorySet(dirSet, pathsTraversed, dir, filePath.getParent());
       }
+
     }
+
     // Add the list's IOStatistics
     iostatistics.aggregate(retrieveIOStatistics(it));
+
     return new ContentSummary.Builder().length(totalLength).
-        fileCount(fileCount).directoryCount(dirCount).
-        spaceConsumed(totalLength).build();
+            fileCount(fileCount).directoryCount(dirCount + dirSet.size()).
+            spaceConsumed(totalLength).build();
+  }
+
+  /**
+   * This method builds the set of all directories found under the base path. We need to do this because if the
+   * directory structure /a/b/c was created with a single mkdirs() call, it is stored as one object in S3 and the
+   * list files iterator will only return a single entry, /a/b/c.
+   *
+   * We keep track of the paths traversed so far to prevent duplicating work. For example, if we had /a/b/c/file-1.txt
+   * and /a/b/c/file-2.txt, we would only recurse over the complete path once and would not have to do anything for
+   * file-2.txt.
+   *
+   * @param dirSet Set of all directories found in the path
+   * @param pathsTraversed Set of all paths traversed so far
+   * @param basePath Path of directory to scan
+   * @param parentPath Parent path of the current file/directory in the iterator
+   */
+  private void buildDirectorySet(Set<Path> dirSet, Set<Path> pathsTraversed, Path basePath, Path parentPath) {
+
+    if (parentPath == null || pathsTraversed.contains(parentPath) || parentPath.equals(basePath)) {

Review comment:
       In most cases (e.g. nested directories a/b/c.txt, a/b/d.txt) the pathsTraversed.contains(parentPath) condition will be true far more often than parentPath.equals(basePath), since every sibling file after the first hits an already-traversed parent, so I think this order is probably the fastest.
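
       For context, the diff above is cut off at the guard condition being discussed. A minimal sketch of how the rest of buildDirectorySet could proceed, going only by the javadoc (an illustration, not the PR's actual body): walk up the parent chain, add each ancestor strictly between the file and the base path to both sets, and stop as soon as a previously traversed parent, the base path, or the filesystem root is reached.

           import java.util.HashSet;
           import java.util.Set;
           import org.apache.hadoop.fs.Path;

           private void buildDirectorySet(Set<Path> dirSet, Set<Path> pathsTraversed,
               Path basePath, Path parentPath) {
             // Stop at the root, at an already-traversed parent, or at the base
             // path itself; the ordering of these checks is what is being
             // discussed in the comment above.
             if (parentPath == null
                 || pathsTraversed.contains(parentPath)
                 || parentPath.equals(basePath)) {
               return;
             }
             dirSet.add(parentPath);         // a directory the listing may not have reported
             pathsTraversed.add(parentPath); // mark as done so siblings short-circuit
             buildDirectorySet(dirSet, pathsTraversed, basePath, parentPath.getParent());
           }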


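       Walking that sketch through the javadoc's own example (file paths from the javadoc; /a as a hypothetical base path):

           Set<Path> dirSet = new HashSet<>();
           Set<Path> pathsTraversed = new HashSet<>();
           Path base = new Path("/a");
           // /a/b/c/file-1.txt: parents /a/b/c and then /a/b are added; the
           // recursion stops when /a equals the base path.
           buildDirectorySet(dirSet, pathsTraversed, base, new Path("/a/b/c"));
           // /a/b/c/file-2.txt: contains(/a/b/c) is already true, so this call
           // returns immediately -- the short-circuit the comment above relies on.
           buildDirectorySet(dirSet, pathsTraversed, base, new Path("/a/b/c"));
           // dirSet is now {/a/b/c, /a/b}; per getDirSummary, the final
           // directoryCount is dirCount + dirSet.size().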


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org