Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/03/05 10:40:20 UTC

[GitHub] [hadoop] ayushtkn commented on a change in pull request #2746: HDFS-15875. Check whether file is being truncated before truncate

ayushtkn commented on a change in pull request #2746:
URL: https://github.com/apache/hadoop/pull/2746#discussion_r588195715



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##########
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() throws IOException {
     fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=90000)
+  public void testTruncateTwiceTogether() throws Exception {
+
+    Path dir = new Path("/testTruncateTwiceTogether");
+    fs.mkdirs(dir);
+    final Path p = new Path(dir, "file");
+    final byte[] data = new byte[100 * BLOCK_SIZE];
+    ThreadLocalRandom.current().nextBytes(data);
+    writeContents(data, data.length, p);
+
+    DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();

Review comment:
       Is this used somewhere?
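       (If the intent is to restore the injector after the test, the usual pattern, sketched here under that assumption, is to reset it in a finally block:)
   
   ```java
   DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();
   try {
     DataNodeFaultInjector.set(injector);
     // ... run the truncate scenario with the delaying injector ...
   } finally {
     // Restore the saved injector so later tests are unaffected.
     DataNodeFaultInjector.set(originInjector);
   }
   ```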

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##########
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() throws IOException {
     fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=90000)
+  public void testTruncateTwiceTogether() throws Exception {
+
+    Path dir = new Path("/testTruncateTwiceTogether");
+    fs.mkdirs(dir);
+    final Path p = new Path(dir, "file");
+    final byte[] data = new byte[100 * BLOCK_SIZE];
+    ThreadLocalRandom.current().nextBytes(data);
+    writeContents(data, data.length, p);
+
+    DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();
+    DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+      @Override
+      public void delay() {
+        try {
+          // Bigger than soft lease period.
+          Thread.sleep(65000);
+        } catch (InterruptedException e) {
+          e.printStackTrace();

Review comment:
       No need to print the stack trace. If possible, add the exception to the method signature and remove this try-catch; otherwise, ignore the exception.
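   (Assuming the base `DataNodeFaultInjector.delay()` declares no checked exception, the override cannot widen the throws clause, so ignoring the interrupt is the practical option here. A minimal sketch:)
   
   ```java
   @Override
   public void delay() {
     try {
       // Bigger than soft lease period.
       Thread.sleep(65000);
     } catch (InterruptedException e) {
       // Ignore: an override cannot add a checked exception to the
       // signature, and an early wake-up is harmless for this test.
     }
   }
   ```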

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##########
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() throws IOException {
     fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=90000)
+  public void testTruncateTwiceTogether() throws Exception {
+
+    Path dir = new Path("/testTruncateTwiceTogether");
+    fs.mkdirs(dir);
+    final Path p = new Path(dir, "file");
+    final byte[] data = new byte[100 * BLOCK_SIZE];
+    ThreadLocalRandom.current().nextBytes(data);
+    writeContents(data, data.length, p);
+
+    DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();
+    DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+      @Override
+      public void delay() {
+        try {
+          // Bigger than soft lease period.
+          Thread.sleep(65000);
+        } catch (InterruptedException e) {
+          e.printStackTrace();
+        }
+      }
+    };
+    // Delay to recovery.
+    DataNodeFaultInjector.set(injector);
+
+    // Truncate by using different client name.
+    Thread t = new Thread(() ->
+        {
+          String hdfsCacheDisableKey = "fs.hdfs.impl.disable.cache";
+          boolean originCacheDisable =
+              conf.getBoolean(hdfsCacheDisableKey, false);
+          try {
+            conf.setBoolean(hdfsCacheDisableKey, true);
+            FileSystem fs1 = FileSystem.get(conf);
+            fs1.truncate(p, data.length - 1);
+          } catch (IOException e) {
+            // ignore
+          } finally {
+            conf.setBoolean(hdfsCacheDisableKey, originCacheDisable);
+          }
+        });
+    t.start();
+    t.join();
+    Thread.sleep(60000);
+    try {
+      fs.truncate(p, data.length - 2);
+    } catch (IOException e) {
+      //GenericTestUtils.assertExceptionContains("is being truncated.", e);
+    }

Review comment:
       Can use LambdaTestUtils.
   
   ```
       LambdaTestUtils.intercept(RemoteException.class,
           "/testTruncateTwiceTogether/file is being truncated",
           () -> fs.truncate(p, data.length - 2));
   ```
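   
   Unlike the empty catch block, `intercept` fails the test when no exception is thrown at all and verifies the message in the same call; it lives in `org.apache.hadoop.test.LambdaTestUtils`.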



