Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/12/01 03:28:51 UTC

[GitHub] [hadoop] pranavsaxena-microsoft opened a new pull request, #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

pranavsaxena-microsoft opened a new pull request, #5176:
URL: https://github.com/apache/hadoop/pull/5176

   JIRA: https://issues.apache.org/jira/browse/HADOOP-18546
   **Details:**
   AbfsInputStream.close() can trigger the return of buffers used for active prefetch GET requests into the ReadBufferManager free buffer pool.
   
   A subsequent prefetch by a different stream in the same process may acquire this same buffer while the original GET is still writing into it, risking corruption of that stream's prefetched data, which may then be returned to the other thread.
   Parent JIRA: https://issues.apache.org/jira/browse/HADOOP-18521
   
   In this PR, we disable purging of the inProgressList. The readBuffers in the inProgressList are picked up by a ReadBufferWorker, get processed, and finally land in the completedList. After thresholdAgeMilliseconds, the readBuffer is evicted (https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java#L280-L285).
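   
   For illustration, a rough sketch of the intent (this is not the actual patch; the list names follow the ReadBufferManager accessors used in the tests discussed below, and the helper shape plus `getStream()` are assumed):
   ```java
   // Hypothetical close-time purge in ReadBufferManager after this change:
   synchronized void purgeBuffersForStream(AbfsInputStream stream) {
     // completed buffers owned by the closing stream can be freed safely
     completedReadList.removeIf(buffer -> buffer.getStream() == stream);
     // in-progress reads are deliberately NOT purged: the worker finishes them,
     // they move to the completedList, and eviction reclaims them after
     // thresholdAgeMilliseconds.
   }
   ```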
   
   **Testing:**
   https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java#L280-L285


[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1343192257

   it's a race condition in the test, which is why you didn't see it...different machine, network etc. 


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1334095330

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  30m 45s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  24m 42s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   5m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 51s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  25m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 13s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  23m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 21s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/7/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 12 new + 1 unchanged - 0 fixed = 13 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 58s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 51s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 31s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 266m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 71d5ba7ac2ad 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02d39ca453c35cfe69c7c78ed3fcae00c7211615 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/7/testReport/ |
   | Max. process+thread count | 1414 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] pranavsaxena-microsoft commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037840780


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   Thanks. I have removed the assertions on the inProgressList-to-completedList transition and on the eviction.



[GitHub] [hadoop] pranavsaxena-microsoft commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037840233


##########
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml:
##########
@@ -2166,13 +2166,6 @@ The switch to turn S3A auditing on or off.
   <description>The AbstractFileSystem for gs: uris.</description>
 </property>
 
-<property>
-  <name>fs.azure.enable.readahead</name>
-  <value>false</value>

Review Comment:
   I have set the value to "true" in the new revision.



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1040769983


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -524,30 +527,33 @@ public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
     final ReadBufferManager readBufferManager
         = ReadBufferManager.getBufferManager();
 
+    final int readBufferTotal = readBufferManager.getNumBuffers();
+
     //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
-    Thread.sleep(1_000L);
+    Thread.sleep(readBufferTransferToInProgressProbableTime);
 
     Assertions.assertThat(readBufferManager.getInProgressCopiedList())
-        .describedAs("InProgressList should have 3 elements")
-        .hasSize(3);
+        .describedAs("InProgressList should have " + readBufferQueuedCount + " elements")
+        .hasSize(readBufferQueuedCount);
+    final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
     Assertions.assertThat(readBufferManager.getFreeListCopy())
-        .describedAs("FreeList should have 13 elements")
-        .hasSize(13);
+        .describedAs("FreeList should have " + freeListBufferCount + "elements")

Review Comment:
   You can actually use String.format patterns here; they are most relevant for on-demand toString calls, which are more expensive. I'm not worrying about it here though.
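   
   For example, AssertJ's `describedAs` has a format-string overload, so the assertion quoted above could read:
   ```java
   Assertions.assertThat(readBufferManager.getFreeListCopy())
       .describedAs("FreeList should have %d elements", freeListBufferCount)
       .hasSize(freeListBufferCount);
   ```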



[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339119467

   > one final change; the cleanup of the input stream in the test.
   > 
   > giving a +1 pending that, and I'm going to test this through spark today ... writing a test to replicate the failure and then verify that all is good when the jar is updated
   
   Thanks. We are calling inputStream.close() at https://github.com/apache/hadoop/pull/5176/files#diff-bdc464e1bfa3d270e552bdf740fc29ec808be9ab2c4f77a99bf896ac605a5698R546. Please advise what is expected for the inputStream cleanup. I agree with the comment on String.format; I shall refactor the code accordingly.
   
   Regards.


[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333736063

   > The ReadAhead feature can be re-enabled by default, as we are undoing the known cause of the corruption issue reported before. Please include that change in this PR.
   > 
   > Also have some comments on tests. Please take a look.
   
   Commit https://github.com/apache/hadoop/commit/69e50c7b4499bffc1eb372799ccba3f26c5fe54e ([HADOOP-18528](https://issues.apache.org/jira/browse/HADOOP-18528). Disable abfs prefetching by default, https://github.com/apache/hadoop/pull/5134) is reverted in this PR by commit https://github.com/apache/hadoop/pull/5176/commits/02d39ca453c35cfe69c7c78ed3fcae00c7211615.


[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1038058686


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   It may be slow, but at least there's no assertion that something finishes *before* a specific timeout. Those are the tests which really have problems on slow networks and overloaded systems.



[GitHub] [hadoop] sreeb-msft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
sreeb-msft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333917331

   Changes and simpler tests look okay to me. Approving once the yetus build results are here.


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1335111159

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 10s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 28s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  24m 25s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 22s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 48s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  26m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  6s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  24m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 26s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/10/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   3m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 55s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 39s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 17s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 261m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 6719cd114751 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 64546a58344cb11f8f7f50e6bdb5ff6af6965b9f |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/10/testReport/ |
   | Max. process+thread count | 2356 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1340942400

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  22m 55s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  48m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  77m  2s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  | 101m 22s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   7m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  12m 19s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   6m  6s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 56s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  49m 43s |  |  branch has errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 41s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  54m 35s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  54m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  47m 12s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  47m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m 39s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 12s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 538m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/12/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 5f08ec4036c1 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c76f610e7c3f04f300f3f3e8dc005700959c2e0a |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/12/testReport/ |
   | Max. process+thread count | 3137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1041097514


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +499,63 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+    final Long serverCommunicationMockLatency = 3_000L;
+    final Long readBufferTransferToInProgressProbableTime = 1_000L;
+    final Integer readBufferQueuedCount = 3;
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(serverCommunicationMockLatency);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    final int readBufferTotal = readBufferManager.getNumBuffers();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(readBufferTransferToInProgressProbableTime);
+
+    Assertions.assertThat(readBufferManager.getInProgressCopiedList())
+        .describedAs("InProgressList should have " + readBufferQueuedCount + " elements")
+        .hasSize(readBufferQueuedCount);
+    final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
+    Assertions.assertThat(readBufferManager.getFreeListCopy())
+        .describedAs("FreeList should have " + freeListBufferCount + "elements")
+        .hasSize(freeListBufferCount);
+    Assertions.assertThat(readBufferManager.getCompletedReadListCopy())
+        .describedAs("CompletedList should have 0 elements")
+        .hasSize(0);
+
+    inputStream.close();

Review Comment:
   The problem with the close() here is that it will only be reached if the assertions hold. If anything goes wrong, an exception is raised and the stream is kept open, with whatever resources it consumes.
   
   It should be closed in a finally block *or* the stream should be opened in a try-with-resources clause. Thanks.
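   
   A minimal sketch of the try-with-resources shape, using the names from the test quoted above (AbfsInputStream is assumed to be Closeable, as its use here implies); the pre-close assertions run inside the block, so the stream is released even if one of them fails:
   ```java
   try (AbfsInputStream inputStream = getAbfsInputStream(client,
       "testSuccessfulReadAhead.txt")) {
     queueReadAheads(inputStream);
     Thread.sleep(readBufferTransferToInProgressProbableTime);
     // ... assertions on inProgressList / freeList / completedList ...
   }
   // the stream is closed here even if an assertion failed; any post-close
   // checks on the ReadBufferManager can follow.
   ```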



[GitHub] [hadoop] snvijaya commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
snvijaya commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1335245329

   > happy with all the production code; just tuning tests.
   > 
   > now, has anyone tried a spark standalone cluster with a build of hadoop trunk, first without and then with this patch, to verify all is good there?
   > 
   > i can do this, but it is good for others to try too
   
   Hi Steve, Unfortunately we didn't get to the standalone cluster checks. We will try the setup and test on Monday. Thanks.
   


[GitHub] [hadoop] snvijaya commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
snvijaya commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037025160


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   The test is trying to unit-test a bigger scope: an existing in-progress buffer moving to the completed list. It would be nice to scope the test to the inProgressList and freeList counts before and after close.
   
   At the client.read() mock, I would suggest a mock that invokes a large sleep for each read. That way, after the queueReadAheads call and a 1-second sleep, 3 buffers will be stuck in the inProgressList and the freeList should show 13 free. The asserts should continue to hold the same numbers post close as well.



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+    while (movedToInProgressList.get() < 3) {
+
+    }
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 3 elements")
+        .isEqualTo(0);
+
+    inputStream.close();
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 3 elements")
+        .isEqualTo(0);
+    preClosedAssertion.set(true);
+
+    while (movedToCompletedList.get() < 3) {
+
+    }
+
+    //Sleep so that response from mockedClient gets back to ReadBufferWorker and
+    // can populate into completedList.
+    Thread.sleep(10000l);
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 3 elements")
+        .isEqualTo(3);
+
+    Thread.sleep(readBufferManager.getThresholdAgeMilliseconds());
+
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+  }
+
+
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The already readBuffer present in the completedList shall be purged by the
+   * inputStream close.
+   * The readBuffer from inProgressList will move to completedList and then
+   * finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecutingWithSomeCompletedBuffers()

Review Comment:
   This test seems to be validating the effect of purge on the completedList. Does it validate any scenario that is not already covered by the HADOOP-17156 test cases?
   
   Also, do all the test asserts from HADOOP-17156 still hold after this PR's change preventing the in-progress list purge?



[GitHub] [hadoop] snvijaya commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
snvijaya commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037798793


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   Hi Steve, the sleep time on these mock threads is meant to keep the thread blocked while the test runs its asserts after queuing reads and again after close. The 1-second sleep (which blocks the main test thread) after queueing reads is consistent with the timing expectations of pre-existing tests in this class that do the same; however, I agree that this test does a lot more beyond the close that needs time synchronization, which can make it brittle.
   
   Hi Pranav, the test asserts after line 566, starting from the 3-second sleep, validate the correct movement of in-progress buffers to the completed list and their eviction, which is functionality this PR does not interfere with. I would suggest we take them out and evaluate whether pre-existing test coverage doesn't already handle it. If there are gaps, let's take them up in a separate PR.



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037390997


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -82,6 +84,16 @@ public class TestAbfsInputStream extends
       REDUCED_READ_BUFFER_AGE_THRESHOLD * 10; // 30 sec
   private static final int ALWAYS_READ_BUFFER_SIZE_TEST_FILE_SIZE = 16 * ONE_MB;
 
+  @After
+  public void afterTest() throws InterruptedException {

Review Comment:
   Override `teardown()` and call the superclass; that way you know the order in which things happen.
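   
   A rough sketch of that shape (assuming the parent test class exposes a `teardown()` to override, as the comment implies):
   ```java
   @Override
   public void teardown() throws Exception {
     // test-specific cleanup first (e.g. resetting read-ahead state) ...
     // ... then let the superclass run its own teardown in a known order.
     super.teardown();
   }
   ```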



##########
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml:
##########
@@ -2166,13 +2166,6 @@ The switch to turn S3A auditing on or off.
   <description>The AbstractFileSystem for gs: uris.</description>
 </property>
 
-<property>
-  <name>fs.azure.enable.readahead</name>
-  <value>false</value>

Review Comment:
   Retain it, but set it to true. Why so? storediag will log it and so show that someone has explicitly said "readahead is safe here".



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -82,6 +84,16 @@ public class TestAbfsInputStream extends
       REDUCED_READ_BUFFER_AGE_THRESHOLD * 10; // 30 sec
   private static final int ALWAYS_READ_BUFFER_SIZE_TEST_FILE_SIZE = 16 * ONE_MB;
 
+  @After
+  public void afterTest() throws InterruptedException {
+    //thread wait so that previous test's inProgress buffers are processed and removed.
+    Thread.sleep(10000l);

Review Comment:
   I don't like this, as it potentially adds 10s to a test run, and it could still be a bit flaky.
   
   What about using `testResetReadBufferManager()`?
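   
   Combining this with the `teardown()` suggestion above, a hedged sketch (assuming `testResetReadBufferManager()` is the test-only reset hook on ReadBufferManager that the name suggests):
   ```java
   @Override
   public void teardown() throws Exception {
     // reset read-ahead state deterministically instead of sleeping 10s
     ReadBufferManager.getBufferManager().testResetReadBufferManager();
     super.teardown();
   }
   ```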
   



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +505,105 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(3000l);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(1000l);
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+    Assertions.assertThat(readBufferManager.getCompletedReadListCopy().size())
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+
+    inputStream.close();
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())

Review Comment:
   use .hasSize(13) 
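   
   That is, for the assertion quoted above:
   ```java
   Assertions.assertThat(readBufferManager.getFreeListCopy())
       .describedAs("FreeList should have 13 elements")
       .hasSize(13);
   ```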



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +505,105 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(3000l);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(1000l);
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+    Assertions.assertThat(readBufferManager.getCompletedReadListCopy().size())
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+
+    inputStream.close();
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+
+    //Sleep so that response from mockedClient gets back to ReadBufferWorker and
+    // can populate into completedList.
+    Thread.sleep(3000l);
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 0 elements")
+        .isEqualTo(0);
+
+    Thread.sleep(readBufferManager.getThresholdAgeMilliseconds());
+
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())

Review Comment:
   use .hasSize()



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   This is very brittle, being timing based. Normally I'd say "no" here, but I know I have a forthcoming PR which uses Object.wait/notify to synchronize:
   https://github.com/apache/hadoop/pull/5117/files#diff-e829dbaa29faf05ae0b331439e9aec3cd02248464a097c86a0227783337b9b76R370
   
   If this test causes problems, it should do the same.
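   
   A rough sketch of the wait/notify alternative to the busy-wait above (names are illustrative, not taken from the referenced PR): the mocked read parks on a shared lock and the test thread wakes it once the assertions that must run before the reads complete have passed.
   ```java
   final Object lock = new Object();
   final AtomicBoolean preCloseAssertionsDone = new AtomicBoolean(false);
   
   Mockito.doAnswer(invocation -> {
     synchronized (lock) {
       while (!preCloseAssertionsDone.get()) {
         lock.wait();                 // releases the lock while parked
       }
     }
     return successOp;
   }).when(client).read(any(String.class), any(Long.class), any(byte[].class),
       any(Integer.class), any(Integer.class), any(String.class),
       any(String.class), any(TracingContext.class));
   
   // ... later, on the test thread, once those assertions have passed:
   synchronized (lock) {
     preCloseAssertionsDone.set(true);
     lock.notifyAll();                // wake every parked mocked read
   }
   ```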



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +505,105 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(3000l);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(1000l);
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+    Assertions.assertThat(readBufferManager.getCompletedReadListCopy().size())
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+
+    inputStream.close();
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+
+    //Sleep so that response from mockedClient gets back to ReadBufferWorker and
+    // can populate into completedList.
+    Thread.sleep(3000l);
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 13 elements")
+        .isEqualTo(13);
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 0 elements")
+        .isEqualTo(0);
+
+    Thread.sleep(readBufferManager.getThresholdAgeMilliseconds());
+
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+    readBufferManager.callTryEvict();
+
+    Assertions.assertThat(getStreamRelatedBufferCount(
+            readBufferManager.getCompletedReadListCopy(), inputStream))
+        .describedAs("CompletedList should have 0 elements")
+        .isEqualTo(0);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())
+        .describedAs("FreeList should have 16 elements")
+        .isEqualTo(16);
+  }
+
+  private int getStreamRelatedBufferCount(final List<ReadBuffer> bufferList,
+      final AbfsInputStream inputStream) {
+    int count = 0;

Review Comment:
   prefer java8 streaming
   ```
   bufferList.stream()
     .filter(buffer -> buffer.getStream() == inputStream)
     .count()
   ```
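   
   For reference, a sketch of the helper rewritten that way; Stream.count() returns a long, so the result is cast back to int to keep the existing signature:
   ```java
   private int getStreamRelatedBufferCount(final List<ReadBuffer> bufferList,
       final AbfsInputStream inputStream) {
     return (int) bufferList.stream()
         .filter(buffer -> buffer.getStream() == inputStream)
         .count();
   }
   ```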
   



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +505,105 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(3000l);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(1000l);

Review Comment:
   1_000L



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +505,105 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    Mockito.doAnswer(invocationOnMock -> {
+          //sleeping thread to mock the network latency from client to backend.
+          Thread.sleep(3000l);
+          return successOp;
+        })
+        .when(client)
+        .read(any(String.class), any(Long.class), any(byte[].class),
+            any(Integer.class), any(Integer.class), any(String.class),
+            any(String.class), any(TracingContext.class));
+
+    AbfsInputStream inputStream = getAbfsInputStream(client,
+        "testSuccessfulReadAhead.txt");
+    queueReadAheads(inputStream);
+
+    final ReadBufferManager readBufferManager
+        = ReadBufferManager.getBufferManager();
+
+    //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+    Thread.sleep(1000l);
+
+    Assertions.assertThat(
+            getStreamRelatedBufferCount(readBufferManager.getInProgressCopiedList(),
+                inputStream))
+        .describedAs("InProgressList should have 3 elements")
+        .isEqualTo(3);
+    Assertions.assertThat(readBufferManager.getFreeListCopy().size())

Review Comment:
   use .hasSize(13) in the assert, so assertj will provide info about the list if there's a mismatch
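   
   For example, a sketch of one of the above assertions made against the list itself rather than its size():
   ```java
   Assertions.assertThat(readBufferManager.getFreeListCopy())
       .describedAs("FreeList should have 13 elements")
       .hasSize(13);
   ```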





[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1344513110

   #5205 is another followup with the logging and a probe through path capabilities; this allows me to verify that backports are in.
   
   An abfs instance is vulnerable if
   ```java
   fs.hasPathCapability(path, "fs.capability.paths.acls") && !fs.hasPathCapability(path, "HADOOP-18546")
   ```
   If that holds, then you need to make sure readahead is disabled or has no queue depth; setting the queue depth to 0 is the option guaranteed to work everywhere.
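   
   A minimal sketch of that probe-and-mitigate approach, assuming the hadoop-azure fs.azure.readaheadqueue.depth configuration key; the capability strings are the ones quoted above, and the class and method names are illustrative only:
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public final class ReadaheadMitigation {
     /**
      * Probe the store; if it looks vulnerable, disable prefetching by
      * setting the readahead queue depth to 0 before opening streams.
      */
     public static Configuration safeConf(Path root, Configuration conf)
         throws IOException {
       try (FileSystem probe = FileSystem.newInstance(root.toUri(), conf)) {
         boolean vulnerable =
             probe.hasPathCapability(root, "fs.capability.paths.acls")
                 && !probe.hasPathCapability(root, "HADOOP-18546");
         if (vulnerable) {
           conf.setInt("fs.azure.readaheadqueue.depth", 0);  // no prefetch queue
         }
       }
       return conf;
     }
   }
   ```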




[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1336158644

   Sorry, should have been clearer: a local Spark build and spark-shell process is ideal for replication and validation; as all splits are processed in different worker threads in that process, it recreates the exact failure mode.
   
   Here is a script you can take and tune for your system; it uses the mkcsv command in the cloudstore JAR.
   
   I am going to add this as a scalatest suite in the same module:
   https://github.com/hortonworks-spark/cloud-integration/blob/master/spark-cloud-integration/src/scripts/validating-csv-record-io.sc




[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1341320673

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 59s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 34s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 38s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m  4s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 13s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 240m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux d56a6605ffc1 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 455a472687bcf7650889e5552824297b90cad118 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/13/testReport/ |
   | Max. process+thread count | 1277 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/13/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1343863437

   > getting a test failure locally, ITestReadBufferManager failing as one of its asserts isn't valid.
   > 
   > going to reopen the jira @pranavsaxena-microsoft can you see if you can replicate the problem and add a followup patch (use the same jira). do make sure you are running this test _first_, and that it is failing for you. thanks
   > 
   > ```
   > INFO] Running org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   > [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.816 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   > [ERROR] testPurgeBufferManagerForSequentialStream(org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager)  Time elapsed: 1.995 s  <<< FAILURE!
   > java.lang.AssertionError:
   > [Buffers associated with closed input streams shouldn't be present]
   > Expecting:
   >  <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((stream_read_bytes_backwards_on_seek=0) (stream_read_seek_forward_operations=0) (stream_read_seek_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_bytes_skipped=0) (stream_read_bytes=1) (action_http_get_request=0) (bytes_read_buffer=1) (seek_in_buffer=0) (remote_bytes_read=81920) (action_http_get_request.failures=0) (stream_read_operations=1) (remote_read_op=8) (stream_read_seek_backward_operations=0));
   > gauges=();
   > minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   > maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   > means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   > }AbfsInputStream@(1517329307){StreamStatistics{counters=((stream_read_seek_bytes_skipped=0) (seek_in_buffer=0) (stream_read_bytes=1) (stream_read_seek_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (bytes_read_buffer=1) (action_http_get_request.failures=0) (action_http_get_request=0) (stream_read_seek_forward_operations=0) (stream_read_bytes_backwards_on_seek=0) (read_ahead_bytes_read=16384) (stream_read_seek_backward_operations=0) (remote_read_op=8));
   > gauges=();
   > minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   > maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   > means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   > }}>
   > not to be equal to:
   >  <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((bytes_read_buffer=1) (stream_read_seek_forward_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0) (stream_read_seek_backward_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0) (seek_in_buffer=0) (action_http_get_request=0) (remote_read_op=8) (stream_read_bytes=1));
   > gauges=();
   > minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   > maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   > means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   > }AbfsInputStream@(1517329307){StreamStatistics{counters=((remote_read_op=8) (stream_read_seek_forward_operations=0) (stream_read_seek_backward_operations=0) (read_ahead_bytes_read=16384) (action_http_get_request.failures=0) (bytes_read_buffer=1) (stream_read_seek_operations=0) (stream_read_bytes=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0) (remote_bytes_read=81920) (stream_read_operations=1));
   > gauges=();
   > minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   > maximums=((action_http_get_request.failures.max=-1) (action_http_get_request.max=-1));
   > means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   > }}>
   > 
   > 	at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.assertListDoesnotContainBuffersForIstream(ITestReadBufferManager.java:145)
   > 	at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.testPurgeBufferManagerForSequentialStream(ITestReadBufferManager.java:120)
   > 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   > 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   > 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   > 	at java.lang.reflect.Method.invoke(Method.java:498)
   > 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
   > 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   > 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
   > 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   > 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   > 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   > 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
   > 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
   > 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
   > 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   > 	at java.lang.Thread.run(Thread.java:750)
   > ```
   
   Thanks. I am checking on it.




[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333206506

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 47s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  1s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 108m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 363e898d8745 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 54e706a8441579fdefb99176773522dcf54ddf36 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/1/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] pranavsaxena-microsoft commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037084565


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -495,6 +509,199 @@ public void testSuccessfulReadAhead() throws Exception {
     checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   * The readBuffer will move to completedList and then finally should get evicted.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+    AbfsClient client = getMockAbfsClient();
+    AbfsRestOperation successOp = getMockRestOp();
+
+    final AtomicInteger movedToInProgressList = new AtomicInteger(0);
+    final AtomicInteger movedToCompletedList = new AtomicInteger(0);
+    final AtomicBoolean preClosedAssertion = new AtomicBoolean(false);
+
+    Mockito.doAnswer(invocationOnMock -> {
+          movedToInProgressList.incrementAndGet();
+          while (movedToInProgressList.get() < 3 || !preClosedAssertion.get()) {
+
+          }
+          movedToCompletedList.incrementAndGet();
+          return successOp;
+        })
+        .when(client)

Review Comment:
   Have taken the change; the sleep and the assertion on the freeList are also included in the tests.





[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1334834625

   ----- Test results -----
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [ERROR] Errors:
   [ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
   [INFO]
   [ERROR] Tests run: 111, Failures: 1, Errors: 1, Skipped: 1
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataContributor:84 » AccessDenied Operat...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataReader:143 » AccessDenied Operation ...
   [INFO]
   [ERROR] Tests run: 567, Failures: 0, Errors: 3, Skipped: 99
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   ITestAbfsFileSystemContractSeek.testSeekAndReadWithReadAhead:130->assertNoIncrementInRemoteReadOps:258 [Number of remote read ops shouldn't increase] expected:<[1]L> but was:<[2]L>
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The ownership o...
   [INFO]
   [ERROR] Tests run: 335, Failures: 1, Errors: 1, Skipped: 54
   
   Time taken: 9 mins 40 secs.
   Find test result for the combination (AppendBlob-HNS-OAuth) in: dev-support/testlogs/2022-12-02_06-12-45/Test-Logs-AppendBlob-HNS-OAuth.txt
    Consolidated test result is saved in: dev-support/testlogs/2022-12-02_06-12-45/Test-Results.txt
   ------------------------
   :::: AGGREGATED TEST RESULT ::::
   
   HNS-OAuth
   ========================
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [ERROR] Errors:
   [ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
   [INFO]
   [ERROR] Tests run: 111, Failures: 1, Errors: 1, Skipped: 1
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataContributor:84 » AccessDenied Operat...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataReader:143 » AccessDenied Operation ...
   [INFO]
   [ERROR] Tests run: 567, Failures: 0, Errors: 3, Skipped: 99
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   ITestAbfsFileSystemContractSeek.testSeekAndReadWithReadAhead:130->assertNoIncrementInRemoteReadOps:258 [Number of remote read ops shouldn't increase] expected:<[1]L> but was:<[2]L>
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The ownership o...
   [INFO]
   [ERROR] Tests run: 335, Failures: 1, Errors: 1, Skipped: 54
   
   HNS-SharedKey
   ========================
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [ERROR]   TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:181->fuzzyValidate:64 The actual value 9 is not within the expected range: [5.60, 8.40].
   [ERROR] Errors:
   [ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
   [INFO]
   [ERROR] Tests run: 111, Failures: 2, Errors: 1, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
   [INFO]
   [ERROR] Tests run: 567, Failures: 0, Errors: 1, Skipped: 54
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   ITestAbfsFileSystemContractSeek.testSeekAndReadWithReadAhead:130->assertNoIncrementInRemoteReadOps:258 [Number of remote read ops shouldn't increase] expected:<[1]L> but was:<[2]L>
   [INFO]
   [ERROR] Tests run: 335, Failures: 1, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   ========================
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [ERROR] Errors:
   [ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
   [INFO]
   [ERROR] Tests run: 111, Failures: 1, Errors: 1, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 567, Failures: 0, Errors: 1, Skipped: 277
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   ITestAbfsTerasort.test_110_teragen:244->executeStage:211->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89 teragen(1000, abfs://testcontainer@pranavsaxenanonhns.dfs.core.windows.net/ITestAbfsTerasort/sortin) failed expected:<0> but was:<1>
   [ERROR]   ITestAbfsFileSystemContractSeek.testSeekAndReadWithReadAhead:130->assertNoIncrementInRemoteReadOps:258 [Number of remote read ops shouldn't increase] expected:<[1]L> but was:<[2]L>
   [ERROR] Errors:
   [ERROR]   ITestAbfsJobThroughManifestCommitter.test_0420_validateJob » OutputValidation ...
   [ERROR]   ITestAbfsManifestCommitProtocol.testCommitLifecycle » OutputValidation `abfs:/...
   [ERROR]   ITestAbfsManifestCommitProtocol.testCommitterWithDuplicatedCommit » OutputValidation
   [ERROR]   ITestAbfsManifestCommitProtocol.testConcurrentCommitTaskWithSubDir » OutputValidation
   [ERROR]   ITestAbfsManifestCommitProtocol.testMapFileOutputCommitter » OutputValidation ...
   [ERROR]   ITestAbfsManifestCommitProtocol.testOutputFormatIntegration » OutputValidation
   [ERROR]   ITestAbfsManifestCommitProtocol.testParallelJobsToAdjacentPaths » OutputValidation
   [ERROR]   ITestAbfsManifestCommitProtocol.testTwoTaskAttemptsCommit » OutputValidation `...
   [INFO]
   [ERROR] Tests run: 335, Failures: 2, Errors: 8, Skipped: 46
   
   AppendBlob-HNS-OAuth
   ========================
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
   [ERROR] Errors:
   [ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
   [INFO]
   [ERROR] Tests run: 111, Failures: 1, Errors: 1, Skipped: 1
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataContributor:84 » AccessDenied Operat...
   [ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataReader:143 » AccessDenied Operation ...
   [INFO]
   [ERROR] Tests run: 567, Failures: 0, Errors: 3, Skipped: 99
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   ITestAbfsFileSystemContractSeek.testSeekAndReadWithReadAhead:130->assertNoIncrementInRemoteReadOps:258 [Number of remote read ops shouldn't increase] expected:<[1]L> but was:<[2]L>
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The ownership o...
   [INFO]
   [ERROR] Tests run: 335, Failures: 1, Errors: 1, Skipped: 54
   
   Time taken: 40 mins 46 secs.




[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333598564

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 55s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  1s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 105m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux feebaa610375 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ddefd2e55a4742608b540084f70e7a32023d1edc |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/6/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1335029393

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 37s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  29m  7s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  26m 57s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 47s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  29m  5s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  29m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  29m 29s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  29m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 54s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/8/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   4m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 47s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 49s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 22s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 50s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 22s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 291m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 56fde3347f0c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fc833f2183874cbe71322c933de924e8cdb0bb19 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/8/testReport/ |
   | Max. process+thread count | 3054 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] pranavsaxena-microsoft commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1037839979


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -82,6 +84,16 @@ public class TestAbfsInputStream extends
       REDUCED_READ_BUFFER_AGE_THRESHOLD * 10; // 30 sec
   private static final int ALWAYS_READ_BUFFER_SIZE_TEST_FILE_SIZE = 16 * ONE_MB;
 
+  @After
+  public void afterTest() throws InterruptedException {
+    //thread wait so that previous test's inProgress buffers are processed and removed.
+    Thread.sleep(10000l);

Review Comment:
   Have refactored in the new revision:
   1. Override of teardown()
   2. Usage of testResetReadBufferManager



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -82,6 +84,16 @@ public class TestAbfsInputStream extends
       REDUCED_READ_BUFFER_AGE_THRESHOLD * 10; // 30 sec
   private static final int ALWAYS_READ_BUFFER_SIZE_TEST_FILE_SIZE = 16 * ONE_MB;
 
+  @After
+  public void afterTest() throws InterruptedException {

Review Comment:
   Have taken it in new revision.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1335079577

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   2m 11s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  30m 31s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  25m 10s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   5m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  31m 14s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  30m 24s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  30m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 58s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  24m 58s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 40s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/9/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  1s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 22s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 289m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 2fa030dcbfa6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac1e758b2db4bb8570c918bb908c9ae2d37b8099 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/9/testReport/ |
   | Max. process+thread count | 3137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1344461171

   (Oh, and on my personal backport I have added a TRACE log in the buffer manager to record its state; abfsInputStream.toString does it too.)
   ```
     private ReadBufferManager() {
       LOGGER.trace("Creating readbuffer manager with HADOOP-18546 patch");
     }
   ```
   I think I will retain those internally as a debug option.
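   
   A minimal sketch of how a state-dump trace like that could be kept cheap if retained as a debug option (this is not code from the PR; the class name, the dumpState() helper and the message text are assumptions for illustration only):
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   // Illustrative only: guard the state dump with isTraceEnabled() so the
   // string building costs nothing unless TRACE logging is switched on.
   public final class BufferManagerTraceSketch {
     private static final Logger LOGGER =
         LoggerFactory.getLogger(BufferManagerTraceSketch.class);
   
     // Placeholder for a summary of the free/in-progress/completed lists.
     private String dumpState() {
       return "free=?, inProgress=?, completed=?";
     }
   
     void logStateIfTracing() {
       if (LOGGER.isTraceEnabled()) {
         LOGGER.trace("ReadBufferManager state: {}", dumpState());
       }
     }
   }
   ```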


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1344325189

   Update: full end-to-end tests through the spark-shell are happy! I was trying to write scalatest tests for this but have not been able to replicate the test failure through my test suite (which rebuilds the .csv file every run, so it was also very slow). With the manual tests passing and #5198 in, all is good.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333261260

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  11m 50s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/3/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 56s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  12m 28s |  |  branch has errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 43s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  63m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 39080ddf73bd 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8cad3d58030c13b223a72547d59ca34ae3c6c469 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/3/testReport/ |
   | Max. process+thread count | 528 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333265717

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  38m 39s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/2/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 36s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 11s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  8s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 103m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b8f4f5ebdc2d 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 39c3403a54c175930514fc3f1f52dfd4d7af673e |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/2/testReport/ |
   | Max. process+thread count | 612 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333294428

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  39m 28s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/4/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 36s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 27s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 33s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 100m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 05bd079ff98b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a8eb44b1d81e76128e8e610da25f51b3eb34d652 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/4/testReport/ |
   | Max. process+thread count | 619 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339526719

   clarified the cleanup problem


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1333335285

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 44s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  12m  4s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/5/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 56s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m  8s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 24s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 41s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  81m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 944864c12a70 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 11ff0439eac477095ba6bba1d52318291c84e557 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/5/testReport/ |
   | Max. process+thread count | 610 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339051008

   > Sorry, I should have been clearer: a local Spark build and spark-shell process is ideal for replication and validation: because all splits are processed in different worker threads in that process, it recreates the exact failure mode.
   > 
   > Here is a script you can take and tune for your system; it uses the mkcsv command in the cloudstore JAR.
   > 
   > I am going to add this as a scalatest suite in the same module https://github.com/hortonworks-spark/cloud-integration/blob/master/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   
   Thanks for the script. I applied the following changes to it: https://github.com/pranavsaxena-microsoft/cloud-integration/commit/1d779f22150be3102635819e4525967573602dd9.
   
   With trunk's jar, I got this exception:
   ```
   22/12/05 23:51:27 ERROR Executor: Exception in task 4.0 in stage 1.0 (TID 5)
   java.lang.NullPointerException: Null value appeared in non-nullable field:
   - field (class: "scala.Long", name: "rowId")
   - root class: "$line85.$read.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.CsvRecord"
   If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (e.g. java.lang.Integer instead of int/scala.Int).
           at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0_0$(Unknown Source)
           at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
           at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
           at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
           at scala.collection.Iterator.foreach(Iterator.scala:943)
           at scala.collection.Iterator.foreach$(Iterator.scala:943)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
           at org.apache.spark.rdd.RDD.$anonfun$foreach$2(RDD.scala:1001)
           at org.apache.spark.rdd.RDD.$anonfun$foreach$2$adapted(RDD.scala:1001)
           at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2302)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
           at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
           at org.apache.spark.scheduler.Task.run(Task.scala:139)
           at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1502)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   ```
   
   Using the jar built from this PR's code, the validation completed successfully:
   ```
   minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   }} 
   22/12/06 01:04:22 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 9) in 14727 ms on snvijaya-Virtual-Machine.mshome.net (executor driver) (9/9)
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
   22/12/06 01:04:22 INFO DAGScheduler: ResultStage 1 (foreach at /home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46) finished in 115.333 s
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 finished: foreach at /home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46, took 115.337621 s
   res35: String = validation completed [start: string, rowId: bigint ... 6 more fields]
   ```
   
   Commands executed:
   ```
   :load /home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   validateDS(rowsDS)
   ```
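   
   For reference, here is a rough sketch of the same kind of validation pass written directly against the Spark Java API (this is not the script above; the abfs:// path, schema options and the rowId column check are assumptions chosen to mirror the failure symptom):
   ```
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.sql.SparkSession;
   
   // Illustrative validation pass: force a full scan of the generated CSV
   // through the abfs connector and fail if any rowId comes back null, which
   // is how the prefetch corruption surfaced in the trunk run above.
   public final class CsvValidationSketch {
     public static void main(String[] args) {
       SparkSession spark = SparkSession.builder()
           .appName("abfs-prefetch-validation")
           .getOrCreate();
   
       Dataset<Row> rows = spark.read()
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("abfs://container@account.dfs.core.windows.net/test/rows.csv");
   
       long badRows = rows.filter("rowId IS NULL").count();
       if (badRows != 0) {
         throw new IllegalStateException("Corrupt rows detected: " + badRows);
       }
       spark.stop();
     }
   }
   ```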


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran merged pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran merged PR #5176:
URL: https://github.com/apache/hadoop/pull/5176


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1336949483

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 43s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 34s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  23m 10s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m  1s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m  9s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  26m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 39s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  23m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 17s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 45s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 257m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/11/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5176 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 3c05404c8f2b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 49025980dbae003a5ef2ba31c29f8ee8bc485b66 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/11/testReport/ |
   | Max. process+thread count | 1376 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5176/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

Posted by GitBox <gi...@apache.org>.
steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1343033763

   Getting a test failure locally: ITestReadBufferManager is failing because one of its asserts is no longer valid.
   
   Going to reopen the JIRA.
   @pranavsaxena-microsoft, can you see if you can replicate the problem and add a follow-up patch (use the same JIRA)?
   Do make sure you are running this test *first* and that it is failing for you. Thanks.
   
   ```
   INFO] Running org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.816 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
   [ERROR] testPurgeBufferManagerForSequentialStream(org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager)  Time elapsed: 1.995 s  <<< FAILURE!
   java.lang.AssertionError:
   [Buffers associated with closed input streams shouldn't be present]
   Expecting:
    <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((stream_read_bytes_backwards_on_seek=0) (stream_read_seek_forward_operations=0) (stream_read_seek_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_bytes_skipped=0) (stream_read_bytes=1) (action_http_get_request=0) (bytes_read_buffer=1) (seek_in_buffer=0) (remote_bytes_read=81920) (action_http_get_request.failures=0) (stream_read_operations=1) (remote_read_op=8) (stream_read_seek_backward_operations=0));
   gauges=();
   minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   }AbfsInputStream@(1517329307){StreamStatistics{counters=((stream_read_seek_bytes_skipped=0) (seek_in_buffer=0) (stream_read_bytes=1) (stream_read_seek_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (bytes_read_buffer=1) (action_http_get_request.failures=0) (action_http_get_request=0) (stream_read_seek_forward_operations=0) (stream_read_bytes_backwards_on_seek=0) (read_ahead_bytes_read=16384) (stream_read_seek_backward_operations=0) (remote_read_op=8));
   gauges=();
   minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   }}>
   not to be equal to:
    <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((bytes_read_buffer=1) (stream_read_seek_forward_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0) (stream_read_seek_backward_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0) (seek_in_buffer=0) (action_http_get_request=0) (remote_read_op=8) (stream_read_bytes=1));
   gauges=();
   minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
   means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
   }AbfsInputStream@(1517329307){StreamStatistics{counters=((remote_read_op=8) (stream_read_seek_forward_operations=0) (stream_read_seek_backward_operations=0) (read_ahead_bytes_read=16384) (action_http_get_request.failures=0) (bytes_read_buffer=1) (stream_read_seek_operations=0) (stream_read_bytes=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0) (remote_bytes_read=81920) (stream_read_operations=1));
   gauges=();
   minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
   maximums=((action_http_get_request.failures.max=-1) (action_http_get_request.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
   }}>
   
   	at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.assertListDoesnotContainBuffersForIstream(ITestReadBufferManager.java:145)
   	at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.testPurgeBufferManagerForSequentialStream(ITestReadBufferManager.java:120)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
   	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
   	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
   	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
   	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   	at java.lang.Thread.run(Thread.java:750)
   
   ```
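   
   For context, the failing check has roughly this shape (a sketch, not the actual test code; the method name and the use of plain Object are assumptions to keep it standalone): it iterates a ReadBufferManager list and asserts that none of the buffers there belong to the closed stream, which is exactly what the output above shows is no longer guaranteed.
   ```
   import static org.assertj.core.api.Assertions.assertThat;
   
   import java.util.List;
   
   // Illustrative only: the shape of a "no buffers for closed streams" check.
   // In the real test the list elements are read buffers owned by an
   // AbfsInputStream; plain Objects are used here so the sketch compiles alone.
   final class ClosedStreamBufferCheckSketch {
   
     static void assertNoBuffersForStream(List<Object> bufferOwners,
                                          Object closedStream) {
       for (Object owner : bufferOwners) {
         assertThat(owner)
             .describedAs("Buffers associated with closed input streams shouldn't be present")
             .isNotEqualTo(closedStream);
       }
     }
   }
   ```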
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org