Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/09/14 06:11:48 UTC

[GitHub] [hadoop] mehakmeet commented on a diff in pull request #4842: HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails

mehakmeet commented on code in PR #4842:
URL: https://github.com/apache/hadoop/pull/4842#discussion_r970310907


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java:
##########
@@ -396,6 +396,9 @@ public Path getLocalPathForWrite(String pathStr, long size,
       Context ctx = confChanged(conf);
       int numDirs = ctx.localDirs.length;
       int numDirsSearched = 0;
+      long maxCapacity = 0;

Review Comment:
   Add a comment giving a better description of this variable, for example "Max capacity in any directory"...
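   For illustration, a minimal sketch of what such a comment could look like around the new variable from the diff above (the wording is only an example, not the final text):
   ```java
         // Max capacity (in bytes) seen across all configured local directories;
         // surfaced in the diagnostic message when no directory can hold the file.
         long maxCapacity = 0;
   ```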



##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java:
##########
@@ -532,4 +533,19 @@ public void testGetLocalPathForWriteForInvalidPaths() throws Exception {
     }
   }
 
+  /**
+   * Test to check the LocalDirAllocation for the less space HADOOP-16769.
+   *
+   * @throws Exception
+   */
+  @Test(timeout = 30000)
+  public void testGetLocalPathForWriteForLessSpace() throws Exception {
+    String dir0 = buildBufferDir(ROOT, 0);
+    String dir1 = buildBufferDir(ROOT, 1);
+    conf.set(CONTEXT, dir0 + "," + dir1);
+    LambdaTestUtils.intercept(DiskErrorException.class, "as the max capacity in any directory is",

Review Comment:
   Use `String.format()` to include the path of the file and its size in the 2nd argument (the contained string), so the test verifies that the path and size are actually propagated into the error message.
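   A rough sketch of that suggestion is below. `filePath` and `fileSize` are hypothetical locals standing in for whatever the test passes to `getLocalPathForWrite()`, and only the "as the max capacity in any directory is" fragment is taken from the diff; the exact message layout depends on the patch under review.
   ```java
   // Sketch only: filePath and fileSize are placeholder locals; the full
   // exception text is whatever the patch actually emits.
   LambdaTestUtils.intercept(DiskErrorException.class,
       String.format("%s with size %d as the max capacity in any directory is",
           filePath, fileSize),
       () -> dirAllocator.getLocalPathForWrite(filePath, fileSize, conf));
   ```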



##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java:
##########
@@ -532,4 +533,19 @@ public void testGetLocalPathForWriteForInvalidPaths() throws Exception {
     }
   }
 
+  /**
+   * Test to check the LocalDirAllocation for the less space HADOOP-16769.

Review Comment:
   nit: Maybe cut the Hadoop JIRA reference and describe what the test actually does, e.g. "Test to verify creating files using LocalDirAllocator when the file size exceeds the capacity of any configured directory"...
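   One possible Javadoc wording along those lines (only a suggestion, not the final text):
   ```java
   /**
    * Test that getLocalPathForWrite() fails with a useful diagnostic when the
    * requested file size exceeds the capacity of every configured directory.
    */
   ```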



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
