Posted to common-issues@hadoop.apache.org by "saxenapranav (via GitHub)" <gi...@apache.org> on 2024/03/18 06:00:31 UTC

[PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

saxenapranav opened a new pull request, #6633:
URL: https://github.com/apache/hadoop/pull/6633

   WIP Draft PR.




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2008948535

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 19 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 28s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 49s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 136 new + 18 unchanged - 0 fixed = 154 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  35m 17s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 31s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 131m 27s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 65] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 107] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 70] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7327f3a96401 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1c4411636b9521cf689d721a0ee444a3f96657ef |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
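
   Several of the SpotBugs findings above have standard remedies. As one illustration, here is a minimal, self-contained sketch (assumed shapes and names, not code from this PR) of a final-singleton layout that would clear the "INSTANCE isn't final", the "write to static field from instance method", and the keySet-iterator findings on KeepAliveCache:

      import java.util.HashMap;
      import java.util.Iterator;
      import java.util.Map;

      // Sketch only: assumed shapes and names, not code from this PR.
      public final class KeepAliveCacheSketch {

        // A final INSTANCE cannot be reassigned, which is what the
        // "isn't final and can't be protected from malicious code"
        // finding asks for.
        private static final KeepAliveCacheSketch INSTANCE =
            new KeepAliveCacheSketch();

        private final Map<String, Long> expiryByRoute = new HashMap<>();

        private KeepAliveCacheSketch() {
        }

        public static KeepAliveCacheSketch getInstance() {
          return INSTANCE;
        }

        public synchronized void close() {
          // Reset internal state instead of writing to the static INSTANCE
          // field from an instance method, which SpotBugs flags above.
          expiryByRoute.clear();
        }

        synchronized void cleanup(long now) {
          // Iterating entrySet() addresses the "inefficient use of keySet
          // iterator" finding: no extra map lookup per key.
          Iterator<Map.Entry<String, Long>> it =
              expiryByRoute.entrySet().iterator();
          while (it.hasNext()) {
            if (it.next().getValue() < now) {
              it.remove();
            }
          }
        }
      }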
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2034030038

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 37s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 58s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/44/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 11 new + 18 unchanged - 0 fixed = 29 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 27s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 32s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 128m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/44/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ffd477d5591e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6ed1dc3a4832a00f0c77a2cbe7b7338fa450007f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/44/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/44/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1562462047


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java:
##########
@@ -124,23 +125,24 @@ static AbfsClientThrottlingIntercept initializeSingleton(AbfsConfiguration abfsC
    * @return true if the operation is throttled and has some bytes to transfer.
    */
   private boolean updateBytesTransferred(boolean isThrottledOperation,
-      HttpOperation abfsHttpOperation) {
+      AbfsHttpOperation abfsHttpOperation) {
     return isThrottledOperation && abfsHttpOperation.getExpectedBytesToBeSent() > 0;
   }
 
   /**
    * Updates the metrics for successful and failed read and write operations.
+   *
    * @param operationType Only applicable for read and write operations.
-   * @param abfsHttpOperation Used for status code and data transferred.
+   * @param httpOperation Used for status code and data transferred.

Review Comment:
   nit: the param name can be kept as abfsHttpOperation.





Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2011634130

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 43s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 19s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 40s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 10 new + 18 unchanged - 0 fixed = 28 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 25s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 10s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 142m 19s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 78] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:[line 1] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4bdcc6218f79 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 379b3ae672adef72a04624da60188c68434d418c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/17/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542757101


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>The PoolingHttpClientConnectionManager heuristic caches all the reusable connections it has created.
+ * JDK's implementation only caches a limited number of connections. The limit is given by the JVM system
+ * property "http.maxConnections". If the system property is not set, it defaults to 5.</li>
+ * <li>PoolingHttpClientConnectionManager expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the total number of connections it can create. For applications using ABFS, it is not
+ * feasible to provide a value at the initialisation of the connectionManager. JDK's implementation has no cap on the
+ * number of connections it can create.</li>

Review Comment:
   What is the cap in the case of the Apache client? How many connections can it create, and how many can it cache?
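
   For reference, the JDK-side limit described in the javadoc above fits in a small self-contained sketch (plain Java; KeepAliveDefaults is an illustrative name, not code from this PR): the cache size comes from the "http.maxConnections" system property and defaults to 5 when unset.

      final class KeepAliveDefaults {
        /**
         * JDK-style keep-alive cache limit: read the "http.maxConnections"
         * JVM system property, falling back to the documented default of 5.
         */
        static int maxCachedConnections() {
          String prop = System.getProperty("http.maxConnections");
          if (prop == null) {
            return 5;
          }
          try {
            return Math.max(1, Integer.parseInt(prop));
          } catch (NumberFormatException e) {
            return 5; // malformed value: keep the default
          }
        }
      }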





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542941161


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestApacheHttpClientFallback.java:
##########
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.FSOperationType;
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.fs.azurebfs.utils.TracingHeaderFormat;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_FALLBACK;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES;
+import static org.apache.hadoop.fs.azurebfs.services.HttpOperationType.APACHE_HTTP_CLIENT;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+
+public class TestApacheHttpClientFallback extends AbstractAbfsTestWithTimeout {
+
+  public TestApacheHttpClientFallback() throws Exception {
+    super();
+  }
+
+  private TracingContext getSampleTracingContext() {
+    String correlationId = "test-corr-id";
+    String fsId = "test-filesystem-id";
+    TracingHeaderFormat format = TracingHeaderFormat.ALL_ID_FORMAT;
+    TracingContext tc = Mockito.spy(new TracingContext(correlationId, fsId,
+        FSOperationType.TEST_OP, true, format, null));
+    Mockito.doAnswer(answer -> {
+          answer.callRealMethod();
+          HttpOperation op = answer.getArgument(0);
+          if (op instanceof AbfsAHCHttpOperation) {
+            Assertions.assertThat(tc.getHeader()).endsWith(APACHE_IMPL);
+          }
+          if (op instanceof AbfsHttpOperation) {
+            if (ApacheHttpClientHealthMonitor.usable()) {
+              Assertions.assertThat(tc.getHeader()).endsWith(JDK_IMPL);
+            } else {
+              Assertions.assertThat(tc.getHeader()).endsWith(JDK_FALLBACK);
+            }
+          }
+          return null;
+        })
+        .when(tc)
+        .constructHeader(Mockito.any(HttpOperation.class),
+            Mockito.nullable(String.class), Mockito.nullable(String.class));
+    return tc;
+  }
+
+  @Test
+  public void testMultipleFailureLeadToFallback()
+      throws Exception {
+    TracingContext tc = getSampleTracingContext();
+    int[] retryIteration = {0};
+    intercept(IOException.class, () -> {
+      getMockRestOperation(retryIteration).execute(tc);
+    });
+    intercept(IOException.class, () -> {
+      getMockRestOperation(retryIteration).execute(tc);
+    });
+  }
+
+  private AbfsRestOperation getMockRestOperation(int[] retryIteration)
+      throws IOException {
+    AbfsConfiguration configuration = Mockito.mock(AbfsConfiguration.class);
+    Mockito.doReturn(APACHE_HTTP_CLIENT)
+        .when(configuration)
+        .getPreferredHttpOperationType();
+    Mockito.doReturn(DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)
+        .when(configuration)
+        .getMaxApacheHttpClientIoExceptions();
+    AbfsClient client = Mockito.mock(AbfsClient.class);
+    Mockito.doReturn(Mockito.mock(ExponentialRetryPolicy.class))
+        .when(client)
+        .getExponentialRetryPolicy();
+
+    AbfsRetryPolicy retryPolicy = Mockito.mock(AbfsRetryPolicy.class);
+    Mockito.doReturn(retryPolicy)
+        .when(client)
+        .getRetryPolicy(Mockito.nullable(String.class));
+
+    Mockito.doAnswer(answer -> {
+          if (retryIteration[0]
+              < DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES) {
+            retryIteration[0]++;
+            return true;
+          } else {
+            return false;
+          }
+        })
+        .when(retryPolicy)
+        .shouldRetry(Mockito.anyInt(), Mockito.nullable(Integer.class));
+
+    AbfsThrottlingIntercept abfsThrottlingIntercept = Mockito.mock(
+        AbfsThrottlingIntercept.class);
+    Mockito.doNothing()
+        .when(abfsThrottlingIntercept)
+        .updateMetrics(Mockito.any(AbfsRestOperationType.class),
+            Mockito.any(HttpOperation.class));
+    Mockito.doNothing()
+        .when(abfsThrottlingIntercept)
+        .sendingRequest(Mockito.any(AbfsRestOperationType.class),
+            Mockito.nullable(AbfsCounters.class));
+    Mockito.doReturn(abfsThrottlingIntercept).when(client).getIntercept();
+
+
+    AbfsRestOperation op = Mockito.spy(new AbfsRestOperation(
+        AbfsRestOperationType.ReadFile,
+        client,
+        AbfsHttpConstants.HTTP_METHOD_GET,
+        new URL("http://localhost"),
+        new ArrayList<>(),
+        null,
+        configuration,
+        "clientId"
+    ));
+
+    Mockito.doReturn(null).when(op).getClientLatency();
+
+    Mockito.doReturn(createApacheHttpOp())
+        .when(op)
+        .createAbfsHttpOperation();
+    Mockito.doReturn(createAhcHttpOp())
+        .when(op)
+        .createAbfsAHCHttpOperation();
+
+    Mockito.doAnswer(answer -> {
+      return answer.getArgument(0);
+    }).when(op).createNewTracingContext(Mockito.nullable(TracingContext.class));
+
+    Mockito.doNothing()
+        .when(op)
+        .signRequest(Mockito.any(HttpOperation.class), Mockito.anyInt());
+
+    Mockito.doAnswer(answer -> {
+      HttpOperation operation = Mockito.spy(
+          (HttpOperation) answer.callRealMethod());
+      Assertions.assertThat(operation).isInstanceOf(
+          retryIteration[0]
+              < DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES
+              ? AbfsAHCHttpOperation.class
+              : AbfsHttpOperation.class);
+      Mockito.doReturn(HTTP_OK).when(operation).getStatusCode();
+      Mockito.doThrow(new IOException("Test Exception"))
+          .when(operation)
+          .processResponse(Mockito.nullable(byte[].class), Mockito.anyInt(),
+              Mockito.anyInt());
+      Mockito.doCallRealMethod().when(operation).getTracingContextSuffix();
+      return operation;
+    }).when(op).createHttpOperation();
+    return op;
+  }
+
+  private AbfsAHCHttpOperation createAhcHttpOp() {
+    AbfsAHCHttpOperation ahcOp = Mockito.mock(AbfsAHCHttpOperation.class);
+    Mockito.doCallRealMethod().when(ahcOp).getTracingContextSuffix();
+    return ahcOp;
+  }
+
+  private AbfsHttpOperation createApacheHttpOp() {
+    AbfsHttpOperation httpOperationMock = Mockito.mock(AbfsHttpOperation.class);
+    Mockito.doCallRealMethod()
+        .when(httpOperationMock)
+        .getTracingContextSuffix();
+    return httpOperationMock;
+  }
+
+  @Test
+  public void testTcHeaderOnJDKClientUse() {
+    TracingContext tc = getSampleTracingContext();
+    AbfsHttpOperation op = Mockito.mock(AbfsHttpOperation.class);
+    Mockito.doCallRealMethod().when(op).getTracingContextSuffix();
+    tc.constructHeader(op, null, null);
+  }

Review Comment:
   We could add a test to verify that after fallback all requests use JDK_CLIENT; a possible shape is sketched below.
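
   Reusing the helpers already defined in this file (getSampleTracingContext, getMockRestOperation, and the ApacheHttpClientHealthMonitor.usable() check from the spy above), a sketch might look like this; the exact assertion points may differ:

      @Test
      public void testAllRequestsUseJdkClientAfterFallback() throws Exception {
        TracingContext tc = getSampleTracingContext();
        int[] retryIteration = {0};
        // First run exhausts the Apache-client IO-exception budget.
        intercept(IOException.class,
            () -> getMockRestOperation(retryIteration).execute(tc));
        // Assumption: the IOExceptions above flip the health monitor, so the
        // spy in getSampleTracingContext() now asserts that every further
        // AbfsHttpOperation header ends with JDK_FALLBACK.
        Assertions.assertThat(ApacheHttpClientHealthMonitor.usable()).isFalse();
        intercept(IOException.class,
            () -> getMockRestOperation(retryIteration).execute(tc));
      }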





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1549455593


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ApacheHttpClientHealthMonitor.java:
##########
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+
+public final class ApacheHttpClientHealthMonitor {

Review Comment:
   Have removed the Monitor class and kept the logic in AbfsApacheHttpClient. I am not inclined to keep the new classes as inner classes, for the following reasons:
   1. The classes are big, and internalizing them would lead to very large code files.
   2. Static inner classes would be difficult to handle with Mockito.
   3. Each of the new classes encapsulates some logic and is not just an object description.
   
   Would be good to know your thoughts on this.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547565825


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {

Review Comment:
   Response headers are tied to a specific response; each response can carry different response headers.
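
   For context, a plausible completion of the truncated method above (a sketch; the PR's actual implementation may differ) that folds Apache's Header[] into the Map<String, List<String>> shape the JDK HttpURLConnection path exposes:

      private Map<String, List<String>> getResponseHeaders(
          final HttpResponse httpResponse) {
        Map<String, List<String>> map = new HashMap<>();
        if (httpResponse == null || httpResponse.getAllHeaders() == null) {
          return map;
        }
        for (Header header : httpResponse.getAllHeaders()) {
          // Group repeated headers under one key, mirroring
          // HttpURLConnection#getHeaderFields().
          map.computeIfAbsent(header.getName(), k -> new ArrayList<>())
              .add(header.getValue());
        }
        return map;
      }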





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548855987


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes would
+ * define the certain orchestration implementation on the basis of network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
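+  // Example output (hypothetical values):
+  //   s=200 e= ci=<client-request-id> ri=<server-request-id> ct=12 st=5 rt=30 bs=0 br=1024 m=GET u=<masked-encoded-url>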
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
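+  /**
+   * Returns the request URL as a string, masked if {@link #setMaskForSAS()}
+   * was called; the masked form is computed once and cached.
+   */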
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
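+  /**
+   * Parses the response. For error status codes the storage error body is
+   * read; otherwise the body is drained into the supplied buffer (or parsed
+   * as a list result for GET calls with no buffer) so that the stream's
+   * resources are released.
+   */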
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // A HEAD request has no response body to parse.
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and the data needs to be retrieved
+        // TODO: find a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {

Review Comment:
   I understand that the rename of AbfsHttpOperation to HttpOperation has generated this git difference. To mitigate confusion and reduce the git diff, I have kept the abstract class name as AbfsHttpOperation, with child classes AbfsAhcHttpOperation and AbfsJdkHttpOperation.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546156603


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/kac/TestApacheClientConnectionPool.java:
##########
@@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpHost;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+public class TestApacheClientConnectionPool extends
+    AbstractAbfsTestWithTimeout {
+
+  public TestApacheClientConnectionPool() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testBasicPool() throws IOException {
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    validatePoolSize(DEFAULT_MAX_CONN_SYS_PROP);
+  }
+
+  @Test
+  public void testSysPropAppliedPool() throws IOException {
+    final String customPoolSize = "10";
+    System.setProperty(HTTP_MAX_CONN_SYS_PROP, customPoolSize);
+    validatePoolSize(Integer.parseInt(customPoolSize));
+  }
+
+  private void validatePoolSize(int size) throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    final HttpClientConnection[] connections = new HttpClientConnection[size * 2];
+
+    for (int i = 0; i < size * 2; i++) {
+      connections[i] = Mockito.mock(HttpClientConnection.class);
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      keepAliveCache.put(routes, connections[i]);
+    }
+
+    for (int i = size; i < size * 2; i++) {
+      Mockito.verify(connections[i], Mockito.times(1)).close();
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      if (i < size) {
+        Assert.assertNotNull(keepAliveCache.get(routes));
+      } else {
+        Assert.assertNull(keepAliveCache.get(routes));
+      }
+    }
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCache() throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+
+    keepAliveCache.put(routes, connection);
+
+    Assert.assertNotNull(keepAliveCache.get(routes));
+    keepAliveCache.put(routes, connection);
+
+    final HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));
+    Assert.assertNull(keepAliveCache.get(routes1));
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCacheCleanup() throws Exception {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+    keepAliveCache.put(routes, connection);
+
+    Thread.sleep(2 * KAC_CONN_TTL);
+    Mockito.verify(connection, Mockito.times(1)).close();
+    Assert.assertNull(keepAliveCache.get(routes));
+    Mockito.verify(connection, Mockito.times(1)).close();
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCacheCleanupWithConnections() throws Exception {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    keepAliveCache.pauseThread();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+    keepAliveCache.put(routes, connection);
+
+    Thread.sleep(2 * KAC_CONN_TTL);
+    Mockito.verify(connection, Mockito.times(0)).close();
+    Assert.assertNull(keepAliveCache.get(routes));
+    Mockito.verify(connection, Mockito.times(1)).close();
+    keepAliveCache.close();
+  }
+}

Review Comment:
   1. Added a new test which:
       - Verifies KAC behavior for multiple routes in parallel.
       - Verifies KAC behavior for a single route whose put and get methods are called in parallel (a sketch follows below).
   2. Yes. Added a test to verify recaching.
   3. Yes, already there.
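   For reference, a minimal sketch of what the parallel put/get test could look
   like (hypothetical; it assumes java.util.concurrent executors plus the
   KeepAliveCache/HttpRoute/mock-connection pattern from the tests above):

   @Test
   public void testParallelPutGetOnSingleRoute() throws Exception {
     KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
     final HttpRoute route = new HttpRoute(new HttpHost("localhost"));
     ExecutorService executor = Executors.newFixedThreadPool(8);
     List<Future<Void>> futures = new ArrayList<>();
     for (int i = 0; i < 8; i++) {
       futures.add(executor.submit(() -> {
         HttpClientConnection conn = Mockito.mock(HttpClientConnection.class);
         // put and get race on the same route: get may return a connection
         // cached by any thread, or null if the cache is momentarily empty.
         keepAliveCache.put(route, conn);
         keepAliveCache.get(route);
         return null;
       }));
     }
     for (Future<Void> future : futures) {
       future.get();
     }
     executor.shutdown();
     keepAliveCache.close();
   }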





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535434260


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance
+   * has a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {

Review Comment:
   use switch case in place of multiple if statements
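   For illustration, a minimal sketch of the suggested refactor (assuming the
   HTTP method constants remain compile-time String constants, as in
   AbfsHttpConstants, so they are valid case labels):

   switch (getMethod()) {
   case HTTP_METHOD_PUT:
     httpRequestBase = new HttpPut(getUri());
     break;
   case HTTP_METHOD_PATCH:
     httpRequestBase = new HttpPatch(getUri());
     break;
   case HTTP_METHOD_POST:
     httpRequestBase = new HttpPost(getUri());
     break;
   default:
     // sendPayload() is only invoked for payload requests, so no other
     // method is expected here.
     break;
   }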





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535487950


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance
+   * has a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {
+        setBytesSent(length);
+      }
+    }
+  }
+
+  private void prepareRequest() throws IOException {
+    if (HTTP_METHOD_GET.equals(getMethod())) {
+      httpRequestBase = new HttpGet(getUri());
+    }
+    if (HTTP_METHOD_DELETE.equals(getMethod())) {
+      httpRequestBase = new HttpDelete(getUri());
+    }
+    if (HTTP_METHOD_HEAD.equals(getMethod())) {
+      httpRequestBase = new HttpHead(getUri());
+    }
+    translateHeaders(httpRequestBase, requestHeaders);
+  }
+
+  private URI getUri() throws IOException {
+    try {
+      return getUrl().toURI();
+    } catch (URISyntaxException e) {
+      throw new IOException(e);
+    }
+  }
+
+  private void translateHeaders(final HttpRequestBase httpRequestBase,
+      final List<AbfsHttpHeader> requestHeaders) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      httpRequestBase.setHeader(header.getName(), header.getValue());
+    }
+  }
+
+  public void setHeader(String name, String val) {
+    requestHeaders.add(new AbfsHttpHeader(name, val));
+  }
+
+  @Override
+  public String getRequestProperty(String name) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(name)) {
+        return header.getValue();
+      }
+    }
+    return "";

Review Comment:
   The empty string constant can be used here.
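   For example: return AbfsHttpConstants.EMPTY_STRING; (the constant already used in parseResponseHeaderAndBody above).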



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547600380


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());

Review Comment:
   taken.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547597777


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -363,6 +364,10 @@ public class AbfsConfiguration{
       FS_AZURE_ABFS_ENABLE_CHECKSUM_VALIDATION, DefaultValue = DEFAULT_ENABLE_ABFS_CHECKSUM_VALIDATION)
   private boolean isChecksumValidationEnabled;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey =
+      FS_AZURE_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES, DefaultValue = DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)
+  private int maxApacheHttpClientIoExceptions;

Review Comment:
   Refactored to maxApacheHttpClientIoExceptionsRetries.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547572833


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance
+   * has a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {

Review Comment:
   HttpRequestBase does not have a close method.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each instance of AbfsClient
+   * has a unique AbfsApacheHttpClient instance, keyed by the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+        if (httpResponse instanceof CloseableHttpResponse) {
+          ((CloseableHttpResponse) httpResponse).close();
+        }
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<>(Collections.singletonList(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return entity.getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    } else if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    } else if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Sending payload failed with expect header enabled, returning the error response ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {
+        setBytesSent(length);
+      }
+    }
+  }
+
+  private void prepareRequest() throws IOException {

Review Comment:
   taken.

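   One further note on this file: the lazy initialization of
   ABFS_APACHE_HTTP_CLIENT_MAP above double-checks a plain HashMap, which
   SpotBugs flags later in this thread. A minimal alternative sketch, assuming
   the same constructor and map key, could be:

       import java.util.concurrent.ConcurrentHashMap;

       private static final ConcurrentHashMap<String, AbfsApacheHttpClient>
           ABFS_APACHE_HTTP_CLIENT_MAP = new ConcurrentHashMap<>();

       private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
           final String clientId) {
         // computeIfAbsent is atomic, so no explicit synchronization is needed
         // and the client is still created at most once per clientId.
         abfsApacheHttpClient = ABFS_APACHE_HTTP_CLIENT_MAP.computeIfAbsent(clientId,
             id -> new AbfsApacheHttpClient(
                 DelegatingSSLSocketFactory.getDefaultFactory(), abfsConfiguration));
       }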




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547582412


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // SDK will set the user agent header in the pipeline. Don't let Apache waste time

Review Comment:
   If we don't set it, ApacheHttpClient will try to read the system property http.agent. The empty value is set to avoid that unneeded overhead.

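   As an illustrative sketch (httpclient 4.x), pinning the agent to an empty
   string means the builder never resolves a default User-Agent at all:

       import org.apache.http.impl.client.CloseableHttpClient;
       import org.apache.http.impl.client.HttpClients;

       CloseableHttpClient client = HttpClients.custom()
           // Empty value: skip default User-Agent resolution; ABFS sets its
           // own User-Agent header on every request anyway.
           .setUserAgent("")
           .build();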




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544295419


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/package-info.java:
##########
@@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+package org.apache.hadoop.fs.azurebfs.services.kac;

Review Comment:
   Needed, as a new package was added.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548855671


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes
+ * define the orchestration implementation on the basis of the network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // HEAD responses carry no body, so there is nothing to parse
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and we need to retrieve the data
+        // TODO: find a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {
+            // read and discard
+            int bytesRead = 0;
+            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
+            while ((bytesRead = stream.read(b)) >= 0) {
+              totalBytesRead += bytesRead;
+            }
+          }
+        }
+      } catch (IOException ex) {
+        log.warn("IO/Network error: {} {}: {}",
+            method, getMaskedUrl(), ex.getMessage());
+        log.debug("IO Error: ", ex);

Review Comment:
   I understand that the rename of AbfsHttpOperation to HttpOperation has generated this git difference. To mitigate confusion and reduce the git difference, I have kept the abstract class name as AbfsHttpOperation, with the child classes named AbfsAhcHttpOperation and AbfsJdkHttpOperation.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2027153805

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 33s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 54s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/27/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 11 new + 18 unchanged - 0 fixed = 29 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/27/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 30s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 128m 41s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Possible doublecheck on org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.ABFS_APACHE_HTTP_CLIENT in org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:[lines 123-125] |
   |  |  Switch statement found in org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.prepareRequest() where default case is missing  At AbfsAHCHttpOperation.java:where default case is missing  At AbfsAHCHttpOperation.java:[lines 343-351] |
   |  |  Switch statement found in org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.sendPayload(byte[], int, int) where default case is missing  At AbfsAHCHttpOperation.java:int, int) where default case is missing  At AbfsAHCHttpOperation.java:[lines 303-311] |
   |  |  Spinning on KeepAliveCache.threadShouldPause in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.run()  At KeepAliveCache.java: At KeepAliveCache.java:[line 102] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/27/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dff1a5f3428d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 002677dbdb9331c2b74e981a9c57a9bde5e91576 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/27/testReport/ |
   | Max. process+thread count | 734 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/27/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546018428


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestApacheHttpClientFallback.java:
##########
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.FSOperationType;
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.fs.azurebfs.utils.TracingHeaderFormat;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_FALLBACK;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES;
+import static org.apache.hadoop.fs.azurebfs.services.HttpOperationType.APACHE_HTTP_CLIENT;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+
+public class TestApacheHttpClientFallback extends AbstractAbfsTestWithTimeout {
+
+  public TestApacheHttpClientFallback() throws Exception {
+    super();
+  }
+
+  private TracingContext getSampleTracingContext() {
+    String correlationId, fsId;
+    TracingHeaderFormat format;
+    correlationId = "test-corr-id";
+    fsId = "test-filesystem-id";
+    format = TracingHeaderFormat.ALL_ID_FORMAT;
+    TracingContext tc = Mockito.spy(new TracingContext(correlationId, fsId,
+        FSOperationType.TEST_OP, true, format, null));
+    Mockito.doAnswer(answer -> {
+          answer.callRealMethod();
+          HttpOperation op = answer.getArgument(0);
+          if (op instanceof AbfsAHCHttpOperation) {
+            Assertions.assertThat(tc.getHeader()).endsWith(APACHE_IMPL);
+          }
+          if (op instanceof AbfsHttpOperation) {
+            if (ApacheHttpClientHealthMonitor.usable()) {
+              Assertions.assertThat(tc.getHeader()).endsWith(JDK_IMPL);
+            } else {
+              Assertions.assertThat(tc.getHeader()).endsWith(JDK_FALLBACK);
+            }
+          }
+          return null;
+        })
+        .when(tc)
+        .constructHeader(Mockito.any(HttpOperation.class),
+            Mockito.nullable(String.class), Mockito.nullable(String.class));
+    return tc;
+  }
+
+  @Test
+  public void testMultipleFailureLeadToFallback()
+      throws Exception {
+    TracingContext tc = getSampleTracingContext();
+    int[] retryIteration = {0};
+    intercept(IOException.class, () -> {
+      getMockRestOperation(retryIteration).execute(tc);
+    });
+    intercept(IOException.class, () -> {
+      getMockRestOperation(retryIteration).execute(tc);
+    });
+  }
+
+  private AbfsRestOperation getMockRestOperation(int[] retryIteration)
+      throws IOException {
+    AbfsConfiguration configuration = Mockito.mock(AbfsConfiguration.class);
+    Mockito.doReturn(APACHE_HTTP_CLIENT)
+        .when(configuration)
+        .getPreferredHttpOperationType();
+    Mockito.doReturn(DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)
+        .when(configuration)
+        .getMaxApacheHttpClientIoExceptions();
+    AbfsClient client = Mockito.mock(AbfsClient.class);
+    Mockito.doReturn(Mockito.mock(ExponentialRetryPolicy.class))
+        .when(client)
+        .getExponentialRetryPolicy();
+
+    AbfsRetryPolicy retryPolicy = Mockito.mock(AbfsRetryPolicy.class);
+    Mockito.doReturn(retryPolicy)
+        .when(client)
+        .getRetryPolicy(Mockito.nullable(String.class));
+
+    Mockito.doAnswer(answer -> {
+          if (retryIteration[0]
+              < DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES) {
+            retryIteration[0]++;
+            return true;
+          } else {
+            return false;
+          }
+        })
+        .when(retryPolicy)
+        .shouldRetry(Mockito.anyInt(), Mockito.nullable(Integer.class));
+
+    AbfsThrottlingIntercept abfsThrottlingIntercept = Mockito.mock(
+        AbfsThrottlingIntercept.class);
+    Mockito.doNothing()
+        .when(abfsThrottlingIntercept)
+        .updateMetrics(Mockito.any(AbfsRestOperationType.class),
+            Mockito.any(HttpOperation.class));
+    Mockito.doNothing()
+        .when(abfsThrottlingIntercept)
+        .sendingRequest(Mockito.any(AbfsRestOperationType.class),
+            Mockito.nullable(AbfsCounters.class));
+    Mockito.doReturn(abfsThrottlingIntercept).when(client).getIntercept();
+
+
+    AbfsRestOperation op = Mockito.spy(new AbfsRestOperation(
+        AbfsRestOperationType.ReadFile,
+        client,
+        AbfsHttpConstants.HTTP_METHOD_GET,
+        new URL("http://localhost"),
+        new ArrayList<>(),
+        null,
+        configuration,
+        "clientId"
+    ));
+
+    Mockito.doReturn(null).when(op).getClientLatency();
+
+    Mockito.doReturn(createApacheHttpOp())
+        .when(op)
+        .createAbfsHttpOperation();
+    Mockito.doReturn(createAhcHttpOp())
+        .when(op)
+        .createAbfsAHCHttpOperation();
+
+    Mockito.doAnswer(answer -> {
+      return answer.getArgument(0);
+    }).when(op).createNewTracingContext(Mockito.nullable(TracingContext.class));
+
+    Mockito.doNothing()
+        .when(op)
+        .signRequest(Mockito.any(HttpOperation.class), Mockito.anyInt());
+
+    Mockito.doAnswer(answer -> {
+      HttpOperation operation = Mockito.spy(
+          (HttpOperation) answer.callRealMethod());
+      Assertions.assertThat(operation).isInstanceOf(
+          retryIteration[0]
+              < DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES
+              ? AbfsAHCHttpOperation.class
+              : AbfsHttpOperation.class);
+      Mockito.doReturn(HTTP_OK).when(operation).getStatusCode();
+      Mockito.doThrow(new IOException("Test Exception"))
+          .when(operation)
+          .processResponse(Mockito.nullable(byte[].class), Mockito.anyInt(),
+              Mockito.anyInt());
+      Mockito.doCallRealMethod().when(operation).getTracingContextSuffix();
+      return operation;
+    }).when(op).createHttpOperation();
+    return op;
+  }
+
+  private AbfsAHCHttpOperation createAhcHttpOp() {
+    AbfsAHCHttpOperation ahcOp = Mockito.mock(AbfsAHCHttpOperation.class);
+    Mockito.doCallRealMethod().when(ahcOp).getTracingContextSuffix();
+    return ahcOp;
+  }
+
+  private AbfsHttpOperation createApacheHttpOp() {
+    AbfsHttpOperation httpOperationMock = Mockito.mock(AbfsHttpOperation.class);
+    Mockito.doCallRealMethod()
+        .when(httpOperationMock)
+        .getTracingContextSuffix();
+    return httpOperationMock;
+  }
+
+  @Test
+  public void testTcHeaderOnJDKClientUse() {
+    TracingContext tc = getSampleTracingContext();
+    AbfsHttpOperation op = Mockito.mock(AbfsHttpOperation.class);
+    Mockito.doCallRealMethod().when(op).getTracingContextSuffix();
+    tc.constructHeader(op, null, null);
+  }

Review Comment:
   There was only one request made after the fallback. I have increased the number of requests made post-fallback and assert on each that it carries the JDK_FALLBACK suffix.

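   A hypothetical extension mirroring that, reusing the helpers from the quoted
   test, would simply repeat the call and let the TracingContext spy assert the
   suffix on each run:

       // Each execute() goes through the spied constructHeader(), which
       // asserts the JDK_FALLBACK suffix once the fallback has happened.
       for (int i = 0; i < 3; i++) {
         intercept(IOException.class, () -> {
           getMockRestOperation(retryIteration).execute(tc);
         });
       }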




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1549231346


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -842,6 +847,17 @@ public DelegatingSSLSocketFactory.SSLChannelMode getPreferredSSLFactoryOption()
     return getEnum(FS_AZURE_SSL_CHANNEL_MODE_KEY, DEFAULT_FS_AZURE_SSL_CHANNEL_MODE);
   }
 
+  /**
+   * @return Config to select netlib for server communication.
+   */
+  public HttpOperationType getPreferredHttpOperationType() {
+    return getEnum(FS_AZURE_NETWORKING_LIBRARY, DEFAULT_NETWORKING_LIBRARY);

Review Comment:
   Since it is the more desirable client, Apache shall be the default. However, there is a fallback mechanism in place that makes the process fall back to JDK in case the new library faces an issue. The current mechanism is:
   if any request on AbfsRestOperation fails more than 3 times (configurable), the whole process falls back to JDK.

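   A minimal sketch of that rule (the counter and markApacheClientUnusable()
   are hypothetical names; ApacheHttpClientHealthMonitor.usable() appears in
   the tests in this PR):

       import java.util.concurrent.atomic.AtomicInteger;

       private final AtomicInteger ioExceptionCount = new AtomicInteger();

       void onApacheClientIoException(final int maxApacheHttpClientIoExceptions) {
         if (ioExceptionCount.incrementAndGet() > maxApacheHttpClientIoExceptions) {
           // From here on ApacheHttpClientHealthMonitor.usable() would return
           // false and AbfsRestOperation would create JDK-based operations.
           markApacheClientUnusable(); // hypothetical helper
         }
       }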




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2011239342

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 52s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m  6s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 26s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 98 new + 18 unchanged - 0 fixed = 116 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 24s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 25s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 32s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 139m 36s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 75] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector.put(HttpClientConnection) might ignore java.io.IOException  At KeepAliveCache.java:At KeepAliveCache.java:[line 215] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2ef97e5de13b 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1566b24c4eb98d71ed18d61f6ed2328e31805f83 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/15/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535118935


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -363,6 +364,10 @@ public class AbfsConfiguration{
       FS_AZURE_ABFS_ENABLE_CHECKSUM_VALIDATION, DefaultValue = DEFAULT_ENABLE_ABFS_CHECKSUM_VALIDATION)
   private boolean isChecksumValidationEnabled;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey =
+      FS_AZURE_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES, DefaultValue = DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)
+  private int maxApacheHttpClientIoExceptions;

Review Comment:
   The variable name does not highlight that this is the maximum configured number of retries.





Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2003592357

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 48s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  38m 10s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 136 new + 18 unchanged - 0 fixed = 154 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 31s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m 13s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  38m 59s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 142m 21s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 58545984ca11 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 00a2e5d428f8f3b9f008b71146e5bbe79ccda1b8 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/testReport/ |
   | Max. process+thread count | 706 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2006966559

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 18s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 39s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 136 new + 18 unchanged - 0 fixed = 154 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 18s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 128m  7s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0590ec9e850e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 53348ffd5b1f3102e9dda16b2f5fa2fda64b82b8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/testReport/ |
   | Max. process+thread count | 763 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535503329


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base HTTP operation class for orchestrating server IO calls. Child classes
+ * define the specific orchestration implementation based on the network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // A HEAD response carries no body, so there is nothing to parse,
+      // even for an error status.
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and needs to retrieve the data;
+        // a better solution is needed
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {
+            // read and discard
+            int bytesRead = 0;
+            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
+            while ((bytesRead = stream.read(b)) >= 0) {
+              totalBytesRead += bytesRead;
+            }
+          }
+        }
+      } catch (IOException ex) {
+        log.warn("IO/Network error: {} {}: {}",
+            method, getMaskedUrl(), ex.getMessage());
+        log.debug("IO Error: ", ex);

Review Comment:
   It is not identifiable which instance the exception was thrown from; the log should include some identifier.
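   
   A minimal sketch of such a log line, reusing the client request id that the class already exposes (hypothetical; only the warn line changes):
   
   ```java
   // Include a per-request identifier so the failing instance is traceable.
   log.warn("IO/Network error: cid={} {} {}: {}",
       getClientRequestId(), method, getMaskedUrl(), ex.getMessage());
   log.debug("IO Error: ", ex);
   ```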





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542777422


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>PoolingHttpClientConnectionManager caches all the reusable connections it has created.
+ * JDK's implementation only caches a limited number of connections. The limit is given by the JVM system
+ * property "http.maxConnections". If the system property is not set, it defaults to 5.</li>
+ * <li>PoolingHttpClientConnectionManager expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the total number of connections it can create. For applications using ABFS, it is not
+ * feasible to provide a value at initialisation of the connectionManager. JDK's implementation has no cap on the
+ * number of connections it can create.</li>
+ * </ol>
+ */
+public final class KeepAliveCache
+    extends HashMap<KeepAliveCache.KeepAliveKey, KeepAliveCache.ClientVector>
+    implements Runnable {
+
+  private boolean threadShouldPause = true;
+
+  private boolean threadShouldRun = true;
+
+  private int maxConn;
+
+  private KeepAliveCache() {
+    Thread thread = new Thread(this);
+    thread.start();
+    setMaxConn();
+  }
+
+  private void setMaxConn() {
+    String sysPropMaxConn = System.getProperty(HTTP_MAX_CONN_SYS_PROP);
+    if (sysPropMaxConn == null) {
+      maxConn = DEFAULT_MAX_CONN_SYS_PROP;
+    } else {
+      maxConn = Integer.parseInt(sysPropMaxConn);
+    }
+  }
+
+  private static final KeepAliveCache INSTANCE = new KeepAliveCache();
+
+  @VisibleForTesting
+  void close() {
+    clear();
+    setMaxConn();
+  }
+
+  public static KeepAliveCache getInstance() {
+    return INSTANCE;
+  }
+
+  @VisibleForTesting
+  void pauseThread() {
+    threadShouldPause = false;
+  }
+
+  @VisibleForTesting
+  void resumeThread() {
+    threadShouldPause = true;
+  }
+
+  private int getKacSize() {
+    return INSTANCE.maxConn;
+  }
+
+  @Override
+  public void run() {
+    while (threadShouldRun) {

Review Comment:
   This can be changed to while (true), since this variable is never set to false.
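   
   A sketch of the suggested simplification (loop body elided; it stays as in the patch):
   
   ```java
   @Override
   public void run() {
     // The flag is never set to false anywhere, so the condition can be literal.
     while (true) {
       // ... existing pause/cleanup logic ...
     }
   }
   ```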





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542935703


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/package-info.java:
##########
@@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+package org.apache.hadoop.fs.azurebfs.services.kac;

Review Comment:
   Is this class needed?





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544216052


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {

Review Comment:
   A similar thing happens in AbfsHttpOperation on trunk: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java#L400-L404
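   
   For reference, the same pattern as it appears in this patch's parseResponseHeaderAndBody (quoted later in this thread):
   
   ```java
   String requestId = getResponseHeader(
       HttpHeaderConfigurations.X_MS_REQUEST_ID);
   if (requestId == null) {
     requestId = AbfsHttpConstants.EMPTY_STRING;
   }
   setRequestId(requestId);
   ```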





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2026793314

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/6633 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/24/console |
   | versions | git=2.34.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544232931


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      // Accumulate values so repeated headers are not silently overwritten.
+      map.computeIfAbsent(header.getName(), k -> new ArrayList<>())
+          .add(header.getValue());
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      // computeIfAbsent avoids double-brace initialization and keeps every
+      // value of a repeated header.
+      map.computeIfAbsent(header.getName(), k -> new ArrayList<>())
+          .add(header.getValue());
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return entity.getContent();
+    }
+    return null;
+  }
+
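+  /**
+   * Builds the entity-enclosing request (PUT/PATCH/POST), attaches the
+   * payload as a ByteArrayEntity and executes it. No-op for methods that
+   * carry no payload.
+   */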
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    } else if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    } else if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(

Review Comment:
   Exception logging is taken from AbfsHttpOperation's expect-100 handling.
   The log now also includes the request URL and the status code with which the expect-100 assertion failed.
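
   For reference, a minimal sketch of what the truncated catch block could look
   like. The exact log message, the getMaskedUrl() helper and the
   getHttpResponse() accessor on the exception are assumptions, not confirmed
   by the quoted diff:

       } catch (AbfsApacheHttpExpect100Exception ex) {
         // getMaskedUrl() and ex.getHttpResponse() are assumed accessors.
         LOG.debug("Expect-100 assertion failed for uri {} with status code {}",
             getMaskedUrl(),
             ex.getHttpResponse().getStatusLine().getStatusCode(), ex);
         connectionDisconnectedOnError = true;
         abfsApacheHttpExpect100Exception = ex;
       }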




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029193479

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 24s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 11s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 24s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 23s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  shadedclient  |   3m  1s |  |  branch has errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   3m 25s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 24s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 24s | [/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |   4m 16s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 23s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 27s |  |  ASF License check generated no output?  |
   |  |   |  16m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 32a6d9183df7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b3a5cfcaf0b8abe70e445c9dbf73c17df01fe6c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/testReport/ |
   | Max. process+thread count | 51 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/29/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2033747862

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 42s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  3s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/42/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 11 new + 18 unchanged - 0 fixed = 29 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 24s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/42/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9d601761aec1 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ab9237dff26350968b66c9067073a47089949cc1 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/42/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/42/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548854300


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // The SDK sets the User-Agent header in the pipeline; don't let Apache waste time building one.
+    httpClient = builder.build();
+  }
+
+  public void close() throws IOException {
+    if (httpClient != null) {
+      httpClient.close();
+    }
+  }
+
+  public HttpResponse execute(HttpRequestBase httpRequest,
+      final AbfsManagedHttpContext abfsHttpClientContext) throws IOException {
+    RequestConfig.Builder requestConfigBuilder = RequestConfig
+        .custom()
+        .setConnectTimeout(abfsConfiguration.getHttpConnectionTimeout())
+        .setSocketTimeout(abfsConfiguration.getHttpReadTimeout());
+    httpRequest.setConfig(requestConfigBuilder.build());
+    return httpClient.execute(httpRequest, abfsHttpClientContext);
+  }
+
+  private static Registry<ConnectionSocketFactory> createSocketFactoryRegistry(
+      ConnectionSocketFactory sslSocketFactory) {
+    if (sslSocketFactory == null) {
+      return RegistryBuilder.<ConnectionSocketFactory>create()
+          .register("http", PlainConnectionSocketFactory.getSocketFactory())

Review Comment:
   taken.
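
   For illustration, a hedged usage sketch of the quoted class; the
   abfsConfiguration instance and requestUri are assumed to be in scope, and a
   GET request is used only as an example:

       AbfsApacheHttpClient client = new AbfsApacheHttpClient(
           DelegatingSSLSocketFactory.getDefaultFactory(), abfsConfiguration);
       // Each execute() takes a managed context that records the
       // connect/send/read timings of the call.
       HttpGet httpGet = new HttpGet(requestUri);
       HttpResponse response = client.execute(httpGet,
           new AbfsManagedHttpContext());
       client.close();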




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029670723

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 13s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 34s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/33/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/33/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 38s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m  5s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field keepAliveTimer  In KeepAliveCache.java:instance field keepAliveTimer  In KeepAliveCache.java |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/33/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8ad88f2e1e99 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 63113d9ad29917b7b5086a754227e6b0e79612c0 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/33/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/33/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2005995840

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 25s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 46s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 140 new + 18 unchanged - 0 fixed = 158 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 20s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 127m 44s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3fdbaf8cede5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eb32140ec90a2bdf11a7fd4fe21c84e393392535 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2007163057

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  18m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 54s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 15s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 137 new + 18 unchanged - 0 fixed = 155 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m  3s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 32s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 145m 14s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e0b4e081a637 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9fddf506d8fbf6d8d760d785ea3feb532049b7d5 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/testReport/ |
   | Max. process+thread count | 615 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540890998


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map that caches AbfsApacheHttpClient instances. Each AbfsClient instance
+   * has its own AbfsApacheHttpClient; the map key is the client's UUID.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());

Review Comment:
   This can also be moved into the prepareRequest method, as one more case in its switch statement.
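   
   A rough sketch of that shape (illustrative only; the method and constant
   names below are the ones already used in this patch):
   
       private void prepareRequest() throws IOException {
         switch (getMethod()) {
         case HTTP_METHOD_PUT:
           httpRequestBase = new HttpPut(getUri());
           break;
         case HTTP_METHOD_PATCH:
           httpRequestBase = new HttpPatch(getUri());
           break;
         case HTTP_METHOD_POST:
           httpRequestBase = new HttpPost(getUri());
           break;
         case HTTP_METHOD_GET:
           httpRequestBase = new HttpGet(getUri());
           break;
         case HTTP_METHOD_DELETE:
           httpRequestBase = new HttpDelete(getUri());
           break;
         case HTTP_METHOD_HEAD:
           httpRequestBase = new HttpHead(getUri());
           break;
         default:
           // defensive: every verb the client sends should be mapped above
           throw new IOException("Unsupported HTTP method: " + getMethod());
         }
       }
   
   Centralising the verb-to-request mapping this way could also keep sendPayload
   focused on attaching the entity for the payload cases.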





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2027017220

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  1s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 23s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/25/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  6s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/25/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 12s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m  5s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Possible doublecheck on org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.ABFS_APACHE_HTTP_CLIENT in org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:[lines 124-126] |
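   
   One way to clear this SpotBugs pattern (a sketch against the map-based
   variant quoted earlier in this thread, not necessarily the code in the
   current revision) is to swap the hand-rolled double-checked locking for
   ConcurrentHashMap.computeIfAbsent:
   
       // assumes: import java.util.concurrent.ConcurrentHashMap;
       private static final Map<String, AbfsApacheHttpClient>
           ABFS_APACHE_HTTP_CLIENT_MAP = new ConcurrentHashMap<>();
   
       private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
           final String clientId) {
         // computeIfAbsent is atomic: the client is created at most once per
         // clientId, and no explicit lock or second get() is needed.
         abfsApacheHttpClient = ABFS_APACHE_HTTP_CLIENT_MAP.computeIfAbsent(
             clientId,
             id -> new AbfsApacheHttpClient(
                 DelegatingSSLSocketFactory.getDefaultFactory(),
                 abfsConfiguration));
       }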
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/25/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 311986ade75d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2021185d6b9f61ce31e4554ee9e705f588666280 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/25/testReport/ |
   | Max. process+thread count | 705 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/25/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547593679


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes would
+ * define the certain orchestration implementation on the basis of network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // If it is HEAD, and it is ERROR
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and need to retrieve the data
+        // need a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {
+            // read and discard
+            int bytesRead = 0;
+            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
+            while ((bytesRead = stream.read(b)) >= 0) {
+              totalBytesRead += bytesRead;
+            }
+          }
+        }
+      } catch (IOException ex) {
+        log.warn("IO/Network error: {} {}: {}",
+            method, getMaskedUrl(), ex.getMessage());
+        log.debug("IO Error: ", ex);

Review Comment:
   It's the code of trunk `AbfsHttpOperation`: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java#L409-L463
   



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes would
+ * define the certain orchestration implementation on the basis of network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // If it is HEAD, and it is ERROR
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and need to retrieve the data
+        // need a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {

Review Comment:
   It's the code of trunk `AbfsHttpOperation`: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java#L409-L463





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548855987


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes would
+ * define the certain orchestration implementation on the basis of network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // If it is HEAD, and it is ERROR
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and need to retrieve the data
+        // need a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {

Review Comment:
   I understand that renaming AbfsHttpOperation to HttpOperation produced this git diff. To reduce confusion and shrink the diff, I have now kept the abstract class named AbfsHttpOperation, with the child classes named AbfsAhcHttpOperation and AbfsJdkHttpOperation.
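   
   For readers following the thread, a minimal sketch of the resulting shape
   (stub code only, assuming java.io.IOException is imported; the real classes
   carry the metrics, masking and logging state quoted above):
   
       public abstract class AbfsHttpOperation {
         public abstract void processResponse(byte[] buffer, int offset, int length)
             throws IOException;
       }
   
       class AbfsJdkHttpOperation extends AbfsHttpOperation {
         @Override
         public void processResponse(byte[] buffer, int offset, int length) {
           // drive the call through java.net.HttpURLConnection
         }
       }
   
       class AbfsAhcHttpOperation extends AbfsHttpOperation {
         @Override
         public void processResponse(byte[] buffer, int offset, int length) {
           // drive the call through ApacheHttpClient
         }
       }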





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540802548


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {

Review Comment:
   This null check can cover both conditions:
   finally {
           if (httpResponse != null) {
               try {
                   EntityUtils.consume(httpResponse.getEntity());
               } catch (IOException e) {
                   // ignore: draining the entity is best-effort cleanup
               } finally {
                   if (httpResponse instanceof CloseableHttpResponse) {
                       try {
                           ((CloseableHttpResponse) httpResponse).close();
                       } catch (IOException e) {
                           // ignore: a close failure here is non-fatal
                       }
                   }
               }
           }
   }
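   
   If the empty catch blocks are a concern, Hadoop's IOUtils helper can close
   and log instead. A sketch (note that, unlike the swallow above, a failure in
   EntityUtils.consume would still propagate here):
   
       if (httpResponse != null) {
         try {
           EntityUtils.consume(httpResponse.getEntity());
         } finally {
           if (httpResponse instanceof CloseableHttpResponse) {
             // closes quietly, logging any IOException at DEBUG
             org.apache.hadoop.io.IOUtils.cleanupWithLogger(
                 LOG, (CloseableHttpResponse) httpResponse);
           }
         }
       }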





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544294674


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>PoolingHttpClientConnectionManager heuristic caches all the reusable connections it has created.
+ * JDK's implementation only caches limited number of connections. The limit is given by JVM system
+ * property "http.maxConnections". If there is no system-property, it defaults to 5.</li>
+ * <li>In PoolingHttpClientConnectionManager, it expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the total number of connections it can create. For application using ABFS, it is not
+ * feasible to provide a value in the initialisation of the connectionManager. JDK's implementation has no cap on the
+ * number of connections it can create.</li>
+ * </ol>
+ */
+public final class KeepAliveCache
+    extends HashMap<KeepAliveCache.KeepAliveKey, KeepAliveCache.ClientVector>
+    implements Runnable {
+
+  private boolean threadShouldPause = true;
+
+  private boolean threadShouldRun = true;
+
+  private int maxConn;
+
+  private KeepAliveCache() {
+    Thread thread = new Thread(this);
+    thread.start();
+    setMaxConn();
+  }
+
+  private void setMaxConn() {
+    String sysPropMaxConn = System.getProperty(HTTP_MAX_CONN_SYS_PROP);
+    if (sysPropMaxConn == null) {
+      maxConn = DEFAULT_MAX_CONN_SYS_PROP;
+    } else {
+      maxConn = Integer.parseInt(sysPropMaxConn);
+    }
+  }
+
+  private static final KeepAliveCache INSTANCE = new KeepAliveCache();
+
+  @VisibleForTesting
+  void close() {
+    clear();
+    setMaxConn();
+  }
+
+  public static KeepAliveCache getInstance() {
+    return INSTANCE;
+  }
+
+  @VisibleForTesting
+  void pauseThread() {
+    threadShouldPause = false;
+  }
+
+  @VisibleForTesting
+  void resumeThread() {
+    threadShouldPause = true;
+  }
+
+  private int getKacSize() {
+    return INSTANCE.maxConn;
+  }
+
+  @Override
+  public void run() {
+    while (threadShouldRun) {

Review Comment:
   That's correct. Taken.
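   For context, a minimal sketch of what the eviction loop under discussion could look like, assuming the fields quoted above (`threadShouldRun`, `threadShouldPause`, whose inverted naming follows the test hooks) and a millisecond `KAC_CONN_TTL`; the `closeStaleConnections` helper on `ClientVector` is hypothetical, not the PR's final code:
   
   ```java
   @Override
   public void run() {
     while (threadShouldRun) {
       // threadShouldPause == true means "running normally", per the
       // pauseThread()/resumeThread() test hooks quoted above.
       if (threadShouldPause) {
         long cutoff = System.currentTimeMillis() - KAC_CONN_TTL;
         synchronized (this) {
           // evict connections that have been idle longer than the TTL
           for (ClientVector vector : values()) {
             vector.closeStaleConnections(cutoff); // hypothetical helper
           }
         }
       }
       try {
         Thread.sleep(KAC_CONN_TTL);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         break;
       }
     }
   }
   ```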
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542348335


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {

Review Comment:
   Should we close the context in the finally block?
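   For illustration, a hedged sketch of what releasing the context in a finally block could look like, reusing the timing setters quoted later in this thread. `HttpContext` itself has no close() method, so `releaseResources()` here is an assumed custom hook on `AbfsManagedHttpContext`, not an existing API:
   
   ```java
   @VisibleForTesting
   HttpResponse executeRequest() throws IOException {
     abfsHttpClientContext = setFinalAbfsClientContext();
     try {
       HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
           abfsHttpClientContext);
       setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
       setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
       setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
       return response;
     } finally {
       abfsHttpClientContext.releaseResources(); // hypothetical cleanup hook
     }
   }
   ```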



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544300906


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());

Review Comment:
   Taken.
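   For reference, a hedged sketch of the payload-method dispatch this thread refers to, extended to the other verbs whose request classes are imported above; the exact shape of the final dispatch is an assumption:
   
   ```java
   if (HTTP_METHOD_PUT.equals(getMethod())) {
     httpRequestBase = new HttpPut(getUri());
   } else if (HTTP_METHOD_PATCH.equals(getMethod())) {
     httpRequestBase = new HttpPatch(getUri());
   } else if (HTTP_METHOD_POST.equals(getMethod())) {
     httpRequestBase = new HttpPost(getUri());
   } else {
     throw new UnsupportedOperationException(
         "Unexpected payload method: " + getMethod());
   }
   ```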



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsConnectionManager.java:
##########
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.config.Registry;
+import org.apache.http.config.SocketConfig;
+import org.apache.http.conn.ConnectionPoolTimeoutException;
+import org.apache.http.conn.ConnectionRequest;
+import org.apache.http.conn.HttpClientConnectionManager;
+import org.apache.http.conn.HttpClientConnectionOperator;
+import org.apache.http.conn.routing.HttpRoute;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.impl.conn.DefaultHttpClientConnectionOperator;
+import org.apache.http.impl.conn.ManagedHttpClientConnectionFactory;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.Asserts;
+
+/**
+ * AbfsConnectionManager is a custom implementation of {@link HttpClientConnectionManager}.
+ * This implementation manages connection-pooling heuristics and custom implementation
+ * of {@link ManagedHttpClientConnectionFactory}.
+ */
+public class AbfsConnectionManager implements HttpClientConnectionManager {
+
+  private final KeepAliveCache kac = KeepAliveCache.getInstance();
+
+  private final AbfsConnFactory httpConnectionFactory;
+
+  private final HttpClientConnectionOperator connectionOperator;
+
+  public AbfsConnectionManager(Registry<ConnectionSocketFactory> socketFactoryRegistry,
+      AbfsConnFactory connectionFactory) {
+    this.httpConnectionFactory = connectionFactory;
+    connectionOperator = new DefaultHttpClientConnectionOperator(
+        socketFactoryRegistry, null, null);
+  }
+
+  @Override
+  public ConnectionRequest requestConnection(final HttpRoute route,
+      final Object state) {
+    return new ConnectionRequest() {
+      @Override
+      public HttpClientConnection get(final long timeout,
+          final TimeUnit timeUnit)
+          throws InterruptedException, ExecutionException,
+          ConnectionPoolTimeoutException {
+        try {
+          HttpClientConnection client = kac.get(route);
+          if (client != null && client.isOpen()) {

Review Comment:
   Taken.
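   For context, a hedged sketch of the stricter reuse check being agreed on here. `isOpen()` and `isStale()` are part of the Apache `HttpConnection` contract; the fallback assumes `AbfsConnFactory` exposes the `ManagedHttpClientConnectionFactory#create(route, config)` signature:
   
   ```java
   HttpClientConnection client = kac.get(route);
   if (client != null && client.isOpen() && !client.isStale()) {
     return client; // reuse the cached, still-healthy connection
   }
   // otherwise create a fresh connection for the route
   return httpConnectionFactory.create(route, null);
   ```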



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544301102


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;

Review Comment:
   Taken.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542783560


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/kac/TestApacheClientConnectionPool.java:
##########
@@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpHost;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+public class TestApacheClientConnectionPool extends
+    AbstractAbfsTestWithTimeout {
+
+  public TestApacheClientConnectionPool() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testBasicPool() throws IOException {
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    validatePoolSize(DEFAULT_MAX_CONN_SYS_PROP);
+  }
+
+  @Test
+  public void testSysPropAppliedPool() throws IOException {
+    final String customPoolSize = "10";
+    System.setProperty(HTTP_MAX_CONN_SYS_PROP, customPoolSize);
+    validatePoolSize(Integer.parseInt(customPoolSize));
+  }
+
+  private void validatePoolSize(int size) throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    final HttpClientConnection[] connections = new HttpClientConnection[size * 2];
+
+    for (int i = 0; i < size * 2; i++) {
+      connections[i] = Mockito.mock(HttpClientConnection.class);
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      keepAliveCache.put(routes, connections[i]);
+    }
+
+    for (int i = size; i < size * 2; i++) {
+      Mockito.verify(connections[i], Mockito.times(1)).close();
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      if (i < size) {
+        Assert.assertNotNull(keepAliveCache.get(routes));
+      } else {
+        Assert.assertNull(keepAliveCache.get(routes));
+      }
+    }
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCache() throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+
+    keepAliveCache.put(routes, connection);
+
+    Assert.assertNotNull(keepAliveCache.get(routes));
+    keepAliveCache.put(routes, connection);
+
+    final HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));

Review Comment:
   I didn't get it: where did we add this route to the cache?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547318082


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {

Review Comment:
   This is no longer required, as the client is now a singleton for the JVM process. Resolving it.
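   For illustration, a minimal sketch of the JVM-wide singleton shape described above; the field and method names are assumptions, not the PR's final code:
   
   ```java
   private static AbfsApacheHttpClient abfsApacheHttpClient;
   
   static synchronized void setAbfsApacheHttpClient(
       final AbfsConfiguration abfsConfiguration) {
     // create the process-wide client once; later callers reuse it
     if (abfsApacheHttpClient == null) {
       abfsApacheHttpClient = new AbfsApacheHttpClient(
           DelegatingSSLSocketFactory.getDefaultFactory(), abfsConfiguration);
     }
   }
   ```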



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient instances. Each AbfsClient instance has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {

Review Comment:
   This is no longer required, as the client is now a singleton for the JVM process.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546018583


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java:
##########
@@ -55,22 +56,47 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest {
    * For test performance, a full x*y test matrix is not used.
    * @return the test parameters
    */
-  @Parameterized.Parameters(name = "Size={0}-readahead={1}")
+  @Parameterized.Parameters(name = "Size={0}-readahead={1}-Client={2}")
   public static Iterable<Object[]> sizes() {
-    return Arrays.asList(new Object[][]{{MIN_BUFFER_SIZE, true},
-        {DEFAULT_READ_BUFFER_SIZE, false},
-        {DEFAULT_READ_BUFFER_SIZE, true},
-        {APPENDBLOB_MAX_WRITE_BUFFER_SIZE, false},
-        {MAX_BUFFER_SIZE, true}});
+    return Arrays.asList(new Object[][]{

Review Comment:
   Taken.
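   For reference, a hedged sketch of what the expanded parameter matrix could look like; the `HttpOperationType` enum and its value names are assumptions drawn from the PR's theme, not the committed names:
   
   ```java
   @Parameterized.Parameters(name = "Size={0}-readahead={1}-Client={2}")
   public static Iterable<Object[]> sizes() {
     return Arrays.asList(new Object[][]{
         {MIN_BUFFER_SIZE, true, HttpOperationType.JDK_HTTP_URL_CONNECTION},
         {MIN_BUFFER_SIZE, true, HttpOperationType.APACHE_HTTP_CLIENT},
         {DEFAULT_READ_BUFFER_SIZE, false, HttpOperationType.APACHE_HTTP_CLIENT},
         {DEFAULT_READ_BUFFER_SIZE, true, HttpOperationType.APACHE_HTTP_CLIENT},
         {APPENDBLOB_MAX_WRITE_BUFFER_SIZE, false, HttpOperationType.JDK_HTTP_URL_CONNECTION},
         {MAX_BUFFER_SIZE, true, HttpOperationType.APACHE_HTTP_CLIENT}});
   }
   ```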
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547343568


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/AbfsApacheHttpExpect100Exception.java:
##########
@@ -0,0 +1,36 @@
+/**

Review Comment:
   We would need a dedicated exception class and can't use IOException directly, the reason being that there is data we want to carry in addition to the exception message. Please suggest if we should use some other mechanism here.
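   For illustration, a hedged sketch of an exception type that carries the response alongside the message, as described above; the actual class in the PR may differ:
   
   ```java
   import java.io.IOException;
   
   import org.apache.http.HttpResponse;
   
   public class AbfsApacheHttpExpect100Exception extends IOException {
     private final HttpResponse httpResponse;
   
     public AbfsApacheHttpExpect100Exception(final String message,
         final HttpResponse httpResponse) {
       super(message);
       this.httpResponse = httpResponse;
     }
   
     // exposes the response data that a plain IOException could not carry
     public HttpResponse getHttpResponse() {
       return httpResponse;
     }
   }
   ```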



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535498363


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1512,6 +1515,7 @@ String initializeUserAgent(final AbfsConfiguration abfsConfiguration,
       sb.append(HUNDRED_CONTINUE);
       sb.append(SEMICOLON);
     }
+    sb.append(" ").append(abfsConfiguration.getPreferredHttpOperationType()).append(";");

Review Comment:
   The empty string and semicolon should be constants.
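   For example, a hedged sketch of what that could look like; `SINGLE_WHITE_SPACE` and `SEMICOLON` are assumed constant names in `AbfsHttpConstants`:
   
   ```java
   sb.append(SINGLE_WHITE_SPACE)
       .append(abfsConfiguration.getPreferredHttpOperationType())
       .append(SEMICOLON);
   ```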



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029626654

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 49s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  9s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/32/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 11s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/32/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 54s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 26s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field keepAliveTimer  In KeepAliveCache.java:instance field keepAliveTimer  In KeepAliveCache.java |
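
   A minimal sketch of the usual fix for the finding above: java.util.Timer is
   not serializable, so a serializable class should mark such a field
   transient. The class below is a hypothetical stand-in, not the PR's
   KeepAliveCache; only the field name keepAliveTimer is taken from the report.

   ```java
   import java.util.HashMap;
   import java.util.Timer;

   // Sketch only: a serializable cache holding a non-serializable helper.
   // Declaring the helper transient tells serialization to skip it, which
   // is what this SpotBugs pattern (SE_BAD_FIELD) asks for.
   public class KeepAliveCacheSketch extends HashMap<String, Long> {

     private static final long serialVersionUID = 1L;

     // java.util.Timer is not Serializable; transient keeps the class
     // contract consistent. The timer can be re-created after
     // deserialization, e.g. in a readObject hook, if ever needed.
     private transient Timer keepAliveTimer = new Timer(true);
   }
   ```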
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/32/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux db108a5c7cd7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9b98a13b526a1f741e2359ee9167949468aeec67 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/32/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/32/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029707913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 31s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 52s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/34/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 8 new + 18 unchanged - 0 fixed = 26 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/34/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 56s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 128m 58s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field keepAliveTimer  In KeepAliveCache.java:instance field keepAliveTimer  In KeepAliveCache.java |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/34/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 070b7d638a48 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 59ba9e5a7efa521a647672b3517f6c7ac236cbcf |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/34/testReport/ |
   | Max. process+thread count | 673 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/34/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546017345


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ApacheHttpClientHealthMonitor.java:
##########
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+
+public final class ApacheHttpClientHealthMonitor {

Review Comment:
   Wanted to encapsulate the static methods related to ApacheHttpClient usability in one class, hence keeping them out of AbfsRestOperation and in a separate static utility class instead (see the sketch below). Open to removing it, but wanted to discuss first whether we should keep it.
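
   For context, a minimal sketch of the static-only utility shape being
   discussed. The failure counter is purely illustrative; it is not a field
   from the PR.

   ```java
   import java.util.concurrent.atomic.AtomicLong;

   // Illustrative sketch: a final class with a private constructor keeps
   // related static helpers in one place instead of spreading them
   // across AbfsRestOperation.
   public final class ApacheHttpClientHealthMonitorSketch {

     // Hypothetical counter of connection failures observed on the
     // ApacheHttpClient path.
     private static final AtomicLong FAILURES = new AtomicLong();

     private ApacheHttpClientHealthMonitorSketch() {
       // no instances
     }

     public static void registerFailure() {
       FAILURES.incrementAndGet();
     }

     public static long failureCount() {
       return FAILURES.get();
     }
   }
   ```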





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anujmodi2021 (via GitHub)" <gi...@apache.org>.
anujmodi2021 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546081399


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance; the key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {

Review Comment:
   Outdated. Please ignore
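
   Although this thread is marked outdated, the quoted snippet shows the
   classic check-then-act hazard: the shared HashMap is read outside the
   synchronized block, which is unsafe for a map that is mutated concurrently.
   A hedged sketch of the idiomatic alternative using
   ConcurrentHashMap.computeIfAbsent; the stub type below stands in for
   AbfsApacheHttpClient, whose real constructor lives in the PR.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   // Sketch only: one client per clientId, created atomically.
   // computeIfAbsent runs the factory at most once per key, so the
   // separate null check and synchronized block become unnecessary.
   final class ClientCacheSketch {

     // Stand-in for AbfsApacheHttpClient.
     static final class HttpClientStub { }

     private static final Map<String, HttpClientStub> CLIENT_MAP =
         new ConcurrentHashMap<>();

     static HttpClientStub clientFor(String clientId) {
       return CLIENT_MAP.computeIfAbsent(clientId, id -> new HttpClientStub());
     }
   }
   ```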





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548855671


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base HTTP operation class for orchestrating server IO calls. Child classes
+ * define the orchestration implementation based on the network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // A HEAD response carries no body to parse, so return early.
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and we need to retrieve the data;
+        // needs a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {
+            // read and discard
+            int bytesRead = 0;
+            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
+            while ((bytesRead = stream.read(b)) >= 0) {
+              totalBytesRead += bytesRead;
+            }
+          }
+        }
+      } catch (IOException ex) {
+        log.warn("IO/Network error: {} {}: {}",
+            method, getMaskedUrl(), ex.getMessage());
+        log.debug("IO Error: ", ex);

Review Comment:
   I understand that the rename of AbfsHttpOperation to HttpOperation generated this large git diff. To mitigate confusion and keep the diff small, I have now kept the abstract class name as AbfsHttpOperation, with the child classes named AbfsAhcHttpOperation and AbfsJdkHttpOperation (see the skeleton below).
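
   For readers following the thread, a hedged skeleton of the naming scheme
   described above; the method bodies are placeholders only.

   ```java
   // Skeleton only: the abstract base keeps its historical name, and the
   // implementations are suffixed by the network library they wrap.
   public abstract class AbfsHttpOperation {
     public abstract String getClientRequestId();
   }

   // ApacheHttpClient-backed implementation ("Ahc").
   class AbfsAhcHttpOperation extends AbfsHttpOperation {
     @Override
     public String getClientRequestId() {
       return ""; // placeholder
     }
   }

   // java.net.HttpURLConnection-backed implementation ("Jdk").
   class AbfsJdkHttpOperation extends AbfsHttpOperation {
     @Override
     public String getClientRequestId() {
       return ""; // placeholder
     }
   }
   ```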





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548854470


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1512,6 +1515,7 @@ String initializeUserAgent(final AbfsConfiguration abfsConfiguration,
       sb.append(HUNDRED_CONTINUE);
       sb.append(SEMICOLON);
     }
+    sb.append(" ").append(abfsConfiguration.getPreferredHttpOperationType()).append(";");

Review Comment:
   taken.
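
   For reference, a hedged illustration of what the quoted append produces.
   Both the base user-agent string and the rendered type name below are
   assumptions for the example, not the connector's real values.

   ```java
   public class UserAgentSuffixSketch {
     public static void main(String[] args) {
       // Mirrors the quoted AbfsClient line with placeholder values.
       StringBuilder sb = new StringBuilder("example-base-user-agent");
       String preferredType = "JDK_HTTP_URL_CONNECTION"; // assumed rendering
       sb.append(" ").append(preferredType).append(";");
       System.out.println(sb); // example-base-user-agent JDK_HTTP_URL_CONNECTION;
     }
   }
   ```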





Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2003192702

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 30s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 45s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  6s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 133 new + 18 unchanged - 0 fixed = 151 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 22 new + 0 unchanged - 0 fixed = 22 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 13s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 139m 18s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 234] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 235] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 239] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 240] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 241] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
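
   Of the findings above, the keySet-iterator one has a purely mechanical fix
   worth sketching; the map contents here are illustrative.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class EntrySetSketch {
     public static void main(String[] args) {
       Map<String, Integer> cache = new HashMap<>();
       cache.put("conn-1", 42);

       // Flagged pattern: iterating keySet() and calling get(key) performs
       // a second hash lookup per element.
       for (String key : cache.keySet()) {
         System.out.println(key + "=" + cache.get(key));
       }

       // Preferred pattern: entrySet() yields key and value in one pass.
       for (Map.Entry<String, Integer> e : cache.entrySet()) {
         System.out.println(e.getKey() + "=" + e.getValue());
       }
     }
   }
   ```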
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 571aab0b709a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 45ee37b5c6340359097bdc6d636c0dd2312e105c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/testReport/ |
   | Max. process+thread count | 692 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2008933492

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 19 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  0s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 21s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 141 new + 18 unchanged - 0 fixed = 159 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  8s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 28s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 56s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 65] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 107] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 70] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
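
   One recurring finding above, the write to the static INSTANCE from the
   instance method close(), typically disappears once the singleton reference
   is final and close() resets internal state instead of the field. A hedged
   sketch, not the PR's actual KeepAliveCache:

   ```java
   public final class SingletonCloseSketch {
     // A final singleton reference cannot be reassigned from an instance
     // method, which resolves the flagged write-from-instance-method.
     public static final SingletonCloseSketch INSTANCE =
         new SingletonCloseSketch();

     private boolean closed;

     private SingletonCloseSketch() {
     }

     public synchronized void close() {
       // Reset internal state rather than nulling out the static field.
       closed = true;
     }

     public synchronized boolean isClosed() {
       return closed;
     }
   }
   ```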
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 189a5c2bb231 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9e8f1abff13ddbd2f1f617044fe1a27872b523ca |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2003169616

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 17s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  36m 38s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 162 new + 18 unchanged - 0 fixed = 180 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m 12s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 22 new + 0 unchanged - 0 fixed = 22 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  37m 51s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 37s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/results-asflicense.txt) |  The patch generated 12 ASF License warnings.  |
   |  |   | 135m 32s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 218] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 219] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 223] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 224] |
   |  |  Unread field:AbfsAHCHttpOperation.java:[line 225] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 69] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 94] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 74] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 75] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 96] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 83] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:[line 54] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 114] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 30] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 89] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:[lines 230-233] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:[lines 203-222] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 745784578a50 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c54a41390db61d45c951af44f07cb1a2863719a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/testReport/ |
   | Max. process+thread count | 558 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
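
Several of the SpotBugs findings above concern the KeepAliveCache singleton: a non-final static INSTANCE that is written from the instance method close(), keySet iteration where entrySet would do, and inner classes that could be static. The following is a minimal, hypothetical sketch of the shape SpotBugs is asking for; the class and member names mirror the report, but the PR's actual fields and methods may differ.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class KeepAliveCache {

  // A final static instance resolves the "isn't final and can't be
  // protected from malicious code" finding.
  private static final KeepAliveCache INSTANCE = new KeepAliveCache();

  private final Map<KeepAliveKey, KeepAliveEntry> cache =
      new ConcurrentHashMap<>();

  private KeepAliveCache() {
  }

  public static KeepAliveCache getInstance() {
    return INSTANCE;
  }

  // close() clears instance state instead of writing to the static INSTANCE,
  // addressing the "write to static field from instance method" finding.
  public void close() {
    cache.clear();
  }

  // entrySet iteration avoids the per-key get() that the keySet finding flags.
  void kacCleanup(long now, long idleTimeoutMs) {
    for (Iterator<Map.Entry<KeepAliveKey, KeepAliveEntry>> it
        = cache.entrySet().iterator(); it.hasNext();) {
      if (now - it.next().getValue().lastUsed > idleTimeoutMs) {
        it.remove();
      }
    }
  }

  // Static nested classes, as the last two findings suggest, avoid holding an
  // implicit reference to the enclosing cache instance.
  private static final class KeepAliveKey {
    // host, port, ... (hypothetical contents)
  }

  private static final class KeepAliveEntry {
    private long lastUsed;
  }
}
```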
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2011330271

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 54s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  36m 16s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 29 new + 18 unchanged - 0 fixed = 47 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 30s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 27s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   1m 25s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | -1 :x: |  shadedclient  |  36m 51s |  |  patch has errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 26s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 27s |  |  ASF License check generated no output?  |
   |  |   | 133m  5s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 78] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:[line 1] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b06f41783d93 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2e1de4cb67cf0860e540ba50f716fd2cada3c08f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/testReport/ |
   | Max. process+thread count | 701 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/16/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2011859573

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 24s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 22s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   4m 39s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   0m 23s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   7m 22s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   7m 46s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 24s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 24s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 30s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 27s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 24s | [/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |   4m 47s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 19s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  21m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6a75853cf379 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bd9443d83f05e157eab1d9f2f5ac9503bf6e2624 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/testReport/ |
   | Max. process+thread count | 87 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/19/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1562502863


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##########
@@ -20,370 +20,515 @@
 
 import java.io.IOException;
 import java.io.InputStream;
-import java.io.OutputStream;
 import java.net.HttpURLConnection;
-import java.net.ProtocolException;
 import java.net.URL;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 
-import javax.net.ssl.HttpsURLConnection;
-import javax.net.ssl.SSLSocketFactory;
-
-import org.apache.hadoop.classification.VisibleForTesting;
-import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
-
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
 import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
 import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
-
-import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.EXPECT_100_JDK_ERROR;
-import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HUNDRED_CONTINUE;
-import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_FALLBACK;
-import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.JDK_IMPL;
-import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.EXPECT;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
 
 /**
- * Implementation of {@link HttpOperation} for orchestrating calls using JDK's HttpURLConnection.
+ * Base HTTP operation class for orchestrating server IO calls. Child classes
+ * define the specific orchestration implementation based on the network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsJdkHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
  */
-public class AbfsHttpOperation extends HttpOperation {
+public abstract class AbfsHttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private final String method;
+  private final URL url;
+  private String maskedUrl;
+  private String maskedEncodedUrl;
+  private int statusCode;
+  private String statusDescription;
+  private String storageErrorCode = "";
+  private String storageErrorMessage = "";
+  private String requestId = "";
+  private String expectedAppendPos = "";
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+  private int expectedBytesToBeSent;
+  private long bytesReceived;
 
-  private static final Logger LOG = LoggerFactory.getLogger(
-      AbfsHttpOperation.class);
+  private long connectionTimeMs;
+  private long sendRequestTimeMs;
+  private long recvResponseTimeMs;
+  private boolean shouldMask = false;
 
-  private HttpURLConnection connection;
+  private final List<AbfsHttpHeader> requestHeaders;
 
-  private boolean connectionDisconnectedOnError = false;
+  private final int connectionTimeout, readTimeout;
 
-  public static AbfsHttpOperation getAbfsHttpOperationWithFixedResult(
+  public AbfsHttpOperation(Logger logger,
       final URL url,
       final String method,
       final int httpStatus) {
-    AbfsHttpOperationWithFixedResult httpOp
-        = new AbfsHttpOperationWithFixedResult(url, method, httpStatus);
-    return httpOp;
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;

Review Comment:
   Shouldn't the status code come from the connection response?
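
   For live requests the status code is indeed read back from the connection response; the httpStatus constructor parameter appears to back the fixed-result case (the removed getAbfsHttpOperationWithFixedResult above), where no request is sent and a canned status is injected. A minimal sketch of the distinction, with everything outside the diff treated as an assumption:

```java
import java.net.HttpURLConnection;
import java.net.URL;

final class StatusCodeSketch {

  // Normal path: the status code is populated from the server's response.
  static int statusFromServer(URL url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      return conn.getResponseCode();
    } finally {
      conn.disconnect();
    }
  }

  // Fixed-result path: no request is sent, so the caller seeds the status,
  // mirroring the removed getAbfsHttpOperationWithFixedResult(url, method, 200).
  static int fixedStatus() {
    return HttpURLConnection.HTTP_OK; // 200, supplied by the caller
  }
}
```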



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2014816239

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 39s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  0s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/22/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 2 new + 18 unchanged - 0 fixed = 20 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/22/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 16s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 133m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/22/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 22907edd70fe 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40c577e7dbaadbd420033090ca73e31245a56140 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/22/testReport/ |
   | Max. process+thread count | 723 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/22/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2011794862

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 24s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 44s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   3m  8s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 38s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 38s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 10s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | +1 :green_heart: |  mvnsite  |   5m 10s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 36s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m  9s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   4m  2s |  |  patch has errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 18s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  |  ASF License check generated no output?  |
   |  |   |  24m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6094466a3ac3 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2f4c84119acc4359a834602c72c53a143ac1b598 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/testReport/ |
   | Max. process+thread count | 88 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/18/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2006936579

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  1s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 22s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 134 new + 18 unchanged - 0 fixed = 152 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m 12s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 42s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m 27s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 132m 21s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:[lines 220-239] |
   | Failed junit tests | hadoop.fs.azurebfs.services.TestApacheHttpClientFallback |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5d5a7355a53d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 665adada2a48d9faff8a3f35e8885090c32391aa |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031137987

   > 1. Add more tests around verifying the sendTime and receiveTime for ManagedClientContext.
   > 2. Tests around connections in KAC getting stale and returning false.
   
   taken.
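
   A minimal sketch of the second suggested test follows, assuming hypothetical put/get accessors on KeepAliveCache (the real cache API in this PR may differ) and using Apache HttpClient's HttpClientConnection.isStale():

```java
import static org.junit.Assert.assertNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.http.HttpClientConnection;
import org.junit.Test;

public class TestKeepAliveCacheStaleness {

  @Test
  public void testStaleConnectionIsNotReused() throws Exception {
    final String routeKey = "account.dfs.core.windows.net:443"; // hypothetical key
    HttpClientConnection conn = mock(HttpClientConnection.class);
    when(conn.isStale()).thenReturn(true); // server has closed its end

    KeepAliveCache cache = KeepAliveCache.getInstance();
    cache.put(routeKey, conn);        // hypothetical accessor

    // A stale cached connection must not be handed back to a new request.
    assertNull(cache.get(routeKey));  // hypothetical accessor
  }
}
```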


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031252460

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 29s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 50s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/36/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 8 new + 18 unchanged - 0 fixed = 26 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 34s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/36/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d2c1cc9280cb 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a420df6dcda06bdae58eb99adcfc660142daa125 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/36/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/36/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544293126


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsConnectionManager.java:
##########
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.config.Registry;
+import org.apache.http.config.SocketConfig;
+import org.apache.http.conn.ConnectionPoolTimeoutException;
+import org.apache.http.conn.ConnectionRequest;
+import org.apache.http.conn.HttpClientConnectionManager;
+import org.apache.http.conn.HttpClientConnectionOperator;
+import org.apache.http.conn.routing.HttpRoute;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.impl.conn.DefaultHttpClientConnectionOperator;
+import org.apache.http.impl.conn.ManagedHttpClientConnectionFactory;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.Asserts;
+
+/**
+ * AbfsConnectionManager is a custom implementation of {@link HttpClientConnectionManager}.
+ * This implementation manages connection-pooling heuristics and uses a custom
+ * implementation of {@link ManagedHttpClientConnectionFactory}.
+ */
+public class AbfsConnectionManager implements HttpClientConnectionManager {
+
+  private final KeepAliveCache kac = KeepAliveCache.getInstance();
+
+  private final AbfsConnFactory httpConnectionFactory;
+
+  private final HttpClientConnectionOperator connectionOperator;
+
+  public AbfsConnectionManager(Registry<ConnectionSocketFactory> socketFactoryRegistry,
+      AbfsConnFactory connectionFactory) {
+    this.httpConnectionFactory = connectionFactory;
+    connectionOperator = new DefaultHttpClientConnectionOperator(
+        socketFactoryRegistry, null, null);
+  }
+
+  @Override
+  public ConnectionRequest requestConnection(final HttpRoute route,
+      final Object state) {
+    return new ConnectionRequest() {
+      @Override
+      public HttpClientConnection get(final long timeout,
+          final TimeUnit timeUnit)
+          throws InterruptedException, ExecutionException,
+          ConnectionPoolTimeoutException {
+        try {
+          HttpClientConnection client = kac.get(route);
+          if (client != null && client.isOpen()) {
+            return client;
+          }
+          return httpConnectionFactory.create(route, null);
+        } catch (IOException ex) {
+          throw new ExecutionException(ex);
+        }
+      }
+
+      @Override
+      public boolean cancel() {
+        return false;
+      }
+    };
+  }
+
+  /**
+   * Releases a connection for reuse. It can be reused only if validDuration is greater than 0.
+   * This method is called by the internal class `ConnectionHolder` in {@link org.apache.http.impl.execchain}.
+   * If the caller wants the connection to be reused, it sends a non-zero validDuration; otherwise it sends 0.
+   * @param conn the connection to release
+   * @param newState the new state of the connection
+   * @param validDuration the duration for which the connection is valid
+   * @param timeUnit the time unit for the validDuration
+   */
+  @Override
+  public void releaseConnection(final HttpClientConnection conn,
+      final Object newState,
+      final long validDuration,
+      final TimeUnit timeUnit) {
+    if (validDuration == 0) {
+      return;
+    }
+    if (conn.isOpen() && conn instanceof AbfsManagedApacheHttpConnection) {
+      HttpRoute route = ((AbfsManagedApacheHttpConnection) conn).getHttpRoute();
+      if (route != null) {
+        kac.put(route, conn);
+      }
+    }
+  }
+
+  @Override
+  public void connect(final HttpClientConnection conn,
+      final HttpRoute route,
+      final int connectTimeout,
+      final HttpContext context) throws IOException {
+    Asserts.check(conn instanceof AbfsManagedApacheHttpConnection,
+        "Connection not obtained from this manager");
+    long start = System.currentTimeMillis();
+    connectionOperator.connect((AbfsManagedApacheHttpConnection) conn,
+        route.getTargetHost(), route.getLocalSocketAddress(),
+        connectTimeout, SocketConfig.DEFAULT, context);
+    if (context instanceof AbfsManagedHttpContext) {
+      ((AbfsManagedHttpContext) context).setConnectTime(
+          System.currentTimeMillis() - start);
+    }
+  }
+
+  @Override
+  public void upgrade(final HttpClientConnection conn,
+      final HttpRoute route,
+      final HttpContext context) throws IOException {
+    Asserts.check(conn instanceof AbfsManagedApacheHttpConnection,

Review Comment:
   Earlier, this and the connection class were public, which could allow connections to be created outside of ABFS. They have now been made package-private. With that, these assert checks are no longer needed; I would remove them.
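   
   For context, a minimal sketch (assuming HttpClient 4.5.x, and not the PR's exact code) of how a custom connection manager such as this one gets wired into the client. Since every connection the client sees then originates from the manager's own `AbfsConnFactory`, a package-private connection class is already enough to make the casts in `connect()`/`upgrade()` safe by construction:
   
   ```java
   import org.apache.http.config.Registry;
   import org.apache.http.conn.socket.ConnectionSocketFactory;
   import org.apache.http.impl.client.CloseableHttpClient;
   import org.apache.http.impl.client.HttpClients;
   
   // Sketch only: the client is built around the custom manager, so every
   // connection it hands out was created by AbfsConnFactory.
   static CloseableHttpClient buildClient(
       final Registry<ConnectionSocketFactory> registry,
       final AbfsConnFactory factory) {
     return HttpClients.custom()
         .setConnectionManager(new AbfsConnectionManager(registry, factory))
         .build();
   }
   ```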





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547316414


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>PoolingHttpClientConnectionManager caches all the reusable connections it has created.
+ * JDK's implementation only caches a limited number of connections. The limit is given by the JVM system
+ * property "http.maxConnections". If the system property is not set, it defaults to 5.</li>
+ * <li>PoolingHttpClientConnectionManager expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the cap on the total number of connections it can create. For applications using
+ * ABFS, it is not feasible to provide a value at initialisation of the connectionManager. JDK's implementation has
+ * no cap on the number of connections it can create.</li>
+ * </ol>
+ */
+public final class KeepAliveCache
+    extends HashMap<KeepAliveCache.KeepAliveKey, KeepAliveCache.ClientVector>
+    implements Runnable {
+
+  private boolean threadShouldPause = true;
+
+  private boolean threadShouldRun = true;
+
+  private int maxConn;
+
+  private KeepAliveCache() {
+    Thread thread = new Thread(this);
+    thread.start();
+    setMaxConn();
+  }
+
+  private void setMaxConn() {
+    String sysPropMaxConn = System.getProperty(HTTP_MAX_CONN_SYS_PROP);
+    if (sysPropMaxConn == null) {
+      maxConn = DEFAULT_MAX_CONN_SYS_PROP;
+    } else {
+      maxConn = Integer.parseInt(sysPropMaxConn);

Review Comment:
   If we look at the implementation of Integer.parseInt(), it might seem fine to skip the null check, since a null argument is handled internally. But that requires the developer to know this behaviour, which may create confusion later; it is better to do the null check on our side and pass only valid values to the external method. What do you feel, @anujmodi2021?
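   
   For reference, a minimal sketch of the behaviour in question (illustrative values; the patch's constants are HTTP_MAX_CONN_SYS_PROP and DEFAULT_MAX_CONN_SYS_PROP): Integer.parseInt(null) throws NumberFormatException rather than falling back to a default, so the explicit check keeps the fallback intent visible.
   
   ```java
   static int resolveMaxConn() {
     String sysPropMaxConn = System.getProperty("http.maxConnections");
     if (sysPropMaxConn == null) {
       // Explicit fallback; Integer.parseInt(null) would throw
       // NumberFormatException instead of returning a default.
       return 5;
     }
     return Integer.parseInt(sysPropMaxConn);
   }
   ```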





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544236402


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {

Review Comment:
   taken.
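   
   For context, a hedged sketch of how that finally-block cleanup can collapse into a single null guard (the helper name is illustrative, not from the patch):
   
   ```java
   import java.io.IOException;
   
   import org.apache.http.HttpResponse;
   import org.apache.http.client.methods.CloseableHttpResponse;
   import org.apache.http.util.EntityUtils;
   
   // Drain the entity so the pooled connection can be reused, then close the
   // response if it is closeable; one null check covers both steps.
   static void consumeAndClose(final HttpResponse response) throws IOException {
     if (response == null) {
       return;
     }
     EntityUtils.consume(response.getEntity());
     if (response instanceof CloseableHttpResponse) {
       ((CloseableHttpResponse) response).close();
     }
   }
   ```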



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()

Review Comment:
   taken.
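   
   On similar lines, a hedged sketch of a small tightening in getContentInputStream(): reuse the entity already fetched instead of calling httpResponse.getEntity() a second time (imports as in the file):
   
   ```java
   @Override
   InputStream getContentInputStream() throws IOException {
     if (httpResponse == null) {
       return null;
     }
     // Reuse the fetched entity rather than calling getEntity() twice.
     HttpEntity entity = httpResponse.getEntity();
     return entity == null ? null : entity.getContent();
   }
   ```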





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2027056482

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 38s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 58s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/26/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 5 new + 18 unchanged - 0 fixed = 23 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 11s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/26/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 54s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 54s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Possible doublecheck on org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.ABFS_APACHE_HTTP_CLIENT in org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsAHCHttpOperation.setAbfsApacheHttpClient(AbfsConfiguration)  At AbfsAHCHttpOperation.java:[lines 125-127] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/26/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4e34a5f6c3e0 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2d2c4948283ef595985de917642f7c1199ad4905 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/26/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/26/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
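   
   On the spotbugs finding above: double-checked locking over a plain HashMap is unsafe because the unsynchronized first read can observe a partially published entry. A hedged sketch of one common fix (assuming Java 8+; not necessarily what the PR will adopt) replaces the map with a ConcurrentHashMap and uses computeIfAbsent:
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   
   // Field and method shown out of class context for brevity.
   private static final ConcurrentHashMap<String, AbfsApacheHttpClient>
       ABFS_APACHE_HTTP_CLIENT_MAP = new ConcurrentHashMap<>();
   
   private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
       final String clientId) {
     // computeIfAbsent is atomic per key, so at most one client is created
     // per clientId without explicit locking or double checking.
     abfsApacheHttpClient = ABFS_APACHE_HTTP_CLIENT_MAP.computeIfAbsent(clientId,
         id -> new AbfsApacheHttpClient(
             DelegatingSSLSocketFactory.getDefaultFactory(), abfsConfiguration));
   }
   ```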
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542816451


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ApacheHttpClientHealthMonitor.java:
##########
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+
+public final class ApacheHttpClientHealthMonitor {

Review Comment:
   I don't think we need a separate class for this





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540807377


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(

Review Comment:
   Add the clientId or some other identifying info to the exception message.
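   
   A hedged sketch of that suggestion (assumes clientId is retained as a field; the current constructor only receives it as a parameter). SLF4J appends the stack trace when the throwable is the last argument:
   
   ```java
   LOG.debug("Expect-100 rejected for clientId={} method={} url={}",
       clientId, getMethod(), getUrl(), ex);
   ```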





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2034501601

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 43s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  5s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/46/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 12 new + 18 unchanged - 0 fixed = 30 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 33s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/46/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 436b44784dd0 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fad4628f478b4e06fbb896303457408a66789c59 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/46/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/46/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540786282


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>

Review Comment:
   What is the idea behind this design? Currently there is no such relation between the JDK client and AbfsClient.
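   
   For comparison, a sketch of the instance-scoped alternative being hinted at
   here: each AbfsClient owning its AbfsApacheHttpClient directly, so no static
   UUID-keyed map is needed. The field and constructor shapes below are
   assumptions, not the PR's code:
   
      // Sketch: tie the Apache client's lifetime to its owning AbfsClient.
      import java.io.Closeable;
      import java.io.IOException;

      public class AbfsClientSketch implements Closeable {
        private final AbfsApacheHttpClient httpClient;

        AbfsClientSketch(AbfsConfiguration abfsConfiguration) {
          // Same construction as in this patch, just held per instance.
          this.httpClient = new AbfsApacheHttpClient(
              DelegatingSSLSocketFactory.getDefaultFactory(), abfsConfiguration);
        }

        @Override
        public void close() throws IOException {
          httpClient.close();  // no static map bookkeeping on close
        }
      }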




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2025139312

   1. Add more tests around verifying the sendTime and receiveTime for ManagedClientContext.
   2. Add tests around connections in KAC getting stale and returning false.
   3. 
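   
   For (1), a rough sketch of what such a timing assertion could look like,
   assuming ManagedClientContext refers to the AbfsManagedHttpContext class in
   this patch; the getSendTime()/getRecvTime() accessors are assumed for
   illustration and may not match the PR's actual API:
   
      // Sketch only: accessor names on AbfsManagedHttpContext are assumptions.
      // Requires: org.junit.Test and org.assertj.core.api.Assertions.
      @Test
      public void testContextRecordsSendAndReceiveTime() throws Exception {
        AbfsManagedHttpContext context = new AbfsManagedHttpContext();
        // ... drive one request through the Apache client with this context ...
        Assertions.assertThat(context.getSendTime())
            .describedAs("sendTime should be recorded after sending a request")
            .isGreaterThan(0L);
        Assertions.assertThat(context.getRecvTime())
            .describedAs("receiveTime should be recorded after a response")
            .isGreaterThan(0L);
      }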



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540811781


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java:
##########
@@ -170,7 +170,7 @@ public void updateMetrics(AbfsRestOperationType operationType,
         }
         break;
       case ReadFile:
-        String range = abfsHttpOperation.getConnection().getRequestProperty(HttpHeaderConfigurations.RANGE);
+        String range = abfsHttpOperation.getRequestProperty(HttpHeaderConfigurations.RANGE);

Review Comment:
   What if the abfsHttpOperation is of type AbfsRestOperation? Removing getConnection() will give an error in that case.
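   
   For context, the shape this refactor points at: the operation type resolving
   the header itself, along the lines of the getConnProperty override elsewhere
   in this patch. The method placement below is an assumption:
   
      // Sketch: resolve a request header from the operation's own header
      // list, so callers no longer need an HttpURLConnection handle.
      public String getRequestProperty(String key) {
        for (AbfsHttpHeader header : requestHeaders) {
          if (header.getName().equals(key)) {
            return header.getValue();
          }
        }
        return null;
      }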




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1538612494


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {

Review Comment:
   If requestId is null, why should we register that request?
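   
   One plausible reading of the null branch, sketched below; the fallback shown
   is an assumption about intent, not the PR's confirmed behavior:
   
      // Sketch of a possible fallback; whether such requests should be
      // registered at all is exactly the question raised above.
      String requestId = getResponseHeader(
          HttpHeaderConfigurations.X_MS_REQUEST_ID);
      if (requestId == null) {
        // Fall back to the id the client sent, keeping the call traceable.
        requestId = getConnProperty(X_MS_CLIENT_REQUEST_ID);
      }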




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anujmodi2021 (via GitHub)" <gi...@apache.org>.
anujmodi2021 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546081399


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {

Review Comment:
   Outdated.
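   
   For the record, a common way to avoid hand-rolled double-checked locking over
   a plain HashMap is ConcurrentHashMap.computeIfAbsent. The sketch below shows
   that shape; it is not necessarily what the updated revision of this PR does:
   
      // Sketch: atomic get-or-create, with no synchronized block and no
      // unsynchronized read of a plain HashMap.
      // Requires: import java.util.concurrent.ConcurrentHashMap;
      private static final ConcurrentHashMap<String, AbfsApacheHttpClient>
          CLIENT_MAP = new ConcurrentHashMap<>();

      private void setAbfsApacheHttpClient(
          final AbfsConfiguration abfsConfiguration, final String clientId) {
        abfsApacheHttpClient = CLIENT_MAP.computeIfAbsent(clientId,
            id -> new AbfsApacheHttpClient(
                DelegatingSSLSocketFactory.getDefaultFactory(),
                abfsConfiguration));
      }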




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029222808

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 12s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 24s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 24s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 23s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  shadedclient  |   4m 17s |  |  branch has errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   4m 41s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   3m 25s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   3m 25s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 42s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 42s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 44s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 32s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0)  |
   | -1 :x: |  javadoc  |   0m  8s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 23s | [/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |   4m 16s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 23s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 24s |  |  ASF License check generated no output?  |
   |  |   |  21m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ba88abe9eaa5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 252b274505ffc17484623ab02ad5b20b5f34d61f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/testReport/ |
   | Max. process+thread count | 76 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/30/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anujmodi2021 (via GitHub)" <gi...@apache.org>.
anujmodi2021 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546081799


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {

Review Comment:
   Outdated. Please ignore.




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2032001807

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 48s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/41/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 14s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 32s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/41/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f85836cea0f9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 241bce05039156d7e55b4c5c6b32acca4c9656ef |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/41/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/41/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029161881

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  15m 11s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 52s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 24s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 24s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 47s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   3m 11s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 22s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   3m 45s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   3m 45s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 22s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 22s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 58s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 23s | [/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  shadedclient  |   5m  5s |  |  patch has errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 24s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  |  ASF License check generated no output?  |
   |  |   |  36m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c80a3c544458 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a854e540fd1e932563e450ecd6b9825be3c7e15c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/testReport/ |
   | Max. process+thread count | 51 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/28/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2034302508

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 54s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 13s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/45/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 12 new + 18 unchanged - 0 fixed = 30 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 37s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 131m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/45/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2afd62b1519c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 21e1200c6de8594e9816721058eeb9bf0624a4d3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/45/testReport/ |
   | Max. process+thread count | 719 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/45/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1548854200


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // SDK will set the user agent header in the pipeline. Don't let Apache waste time

Review Comment:
   Have fixed the comment.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2033984569

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 48s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  9s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/43/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 11 new + 18 unchanged - 0 fixed = 29 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 36s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 131m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/43/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6641b7a5fc76 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6cd01b500b900ed7aec236fd9a8091f35f76e810 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/43/testReport/ |
   | Max. process+thread count | 558 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/43/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535488876


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());

Review Comment:
   Fully qualified package name used here; since `AbfsConnFactory` is in the same package, the simple class name suffices.
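   
   For illustration, a minimal sketch of that cleanup (a sketch only, not the committed fix; all names are taken from the diff above, and `AbfsConnFactory` is declared in the same `org.apache.hadoop.fs.azurebfs.services` package as this class, so no import is needed):
   
   ```java
   // AbfsConnFactory lives in the same package, so the simple class
   // name resolves without the fully qualified prefix.
   final AbfsConnectionManager connMgr = new AbfsConnectionManager(
       createSocketFactoryRegistry(
           new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
               getDefaultHostnameVerifier())),
       new AbfsConnFactory());
   ```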





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535437602


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {

Review Comment:
   Close the `httpRequestBase` in the finally block so the connection is released even on failure.
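   
   A hedged sketch of one way to honour that (names from the diff above; `releaseConnection()` is the Apache HttpClient 4.x call that frees a request's leased connection). Note the success path has to keep the connection usable until the response entity is consumed in `processResponse()`, so this sketch releases only on the failure path:
   
   ```java
   try {
     httpResponse = executeRequest();
   } catch (IOException ex) {
     // No response will be consumed after a failure, so free the
     // request's leased connection before propagating the error.
     httpRequestBase.releaseConnection();
     throw ex;
   }
   ```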



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {
+        setBytesSent(length);
+      }
+    }
+  }
+
+  private void prepareRequest() throws IOException {

Review Comment:
   Use a switch case here.
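   
   For illustration, a switch-based `prepareRequest()` could take this shape (a sketch built from the constants and helpers already imported in the diff, not the committed code; GET, DELETE and HEAD are the non-payload methods per `isPayloadRequest`):
   
   ```java
   private void prepareRequest() throws IOException {
     switch (getMethod()) {
     case HTTP_METHOD_GET:
       httpRequestBase = new HttpGet(getUri());
       break;
     case HTTP_METHOD_DELETE:
       httpRequestBase = new HttpDelete(getUri());
       break;
     case HTTP_METHOD_HEAD:
       httpRequestBase = new HttpHead(getUri());
       break;
     default:
       // Payload methods (PUT/PATCH/POST) are handled in sendPayload().
       throw new IOException("Unexpected HTTP method: " + getMethod());
     }
     translateHeaders(httpRequestBase, requestHeaders);
   }
   ```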





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544235470


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java:
##########
@@ -170,7 +170,7 @@ public void updateMetrics(AbfsRestOperationType operationType,
         }
         break;
       case ReadFile:
-        String range = abfsHttpOperation.getConnection().getRequestProperty(HttpHeaderConfigurations.RANGE);
+        String range = abfsHttpOperation.getRequestProperty(HttpHeaderConfigurations.RANGE);

Review Comment:
   The variable name was wrong; I have refactored it to `httpOperation`, which is of type `HttpOperation`. `getRequestProperty` is an abstract method that `AbfsHttpOperation` and `AbfsAHCHttpOperation` implement; the JDK-based `AbfsHttpOperation` implements it as `getConnection().getRequestProperty(key)`.
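   
   Roughly, the abstraction described reads like this (a simplified, self-contained sketch; the committed signatures may differ):
   
   ```java
   import java.net.HttpURLConnection;
   
   abstract class HttpOperation {
     /** Each transport supplies its own way of reading a request header. */
     abstract String getRequestProperty(String key);
   }
   
   class AbfsHttpOperation extends HttpOperation {
     private HttpURLConnection connection;  // the wrapped JDK connection
   
     @Override
     String getRequestProperty(String key) {
       // JDK-based transport: delegate to the HttpURLConnection.
       return connection.getRequestProperty(key);
     }
   }
   ```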





Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2009879439

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 12s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  38m 35s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 104 new + 18 unchanged - 0 fixed = 122 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 28s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 25s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   1m 14s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  38m 24s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 139m 28s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 330] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:[line 88] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:[line 109] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:[line 96] |
   |  |  Unread field:AbfsConnectionManager.java:[line 124] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 34] |
   |  |  Unread field:AbfsManagedHttpRequestExecutor.java:[line 57] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 40] |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 88] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 148] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 64] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 123] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 264-267] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 237-256] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 32f4e32bb829 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 82ada2e448a02204c252aaab12940441822deac9 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/testReport/ |
   | Max. process+thread count | 622 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535496335


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // SDK will set the user agent header in the pipeline. Don't let Apache waste time

Review Comment:
   So why are we sending an empty string here?
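   
   For context, a minimal sketch of the assumed behaviour (not the PR's code): when no user agent is set, `HttpClientBuilder` computes a default `Apache-HttpClient/x.y` string via `VersionInfo`, and the `RequestUserAgent` interceptor only applies the configured value to requests that carry no `User-Agent` header of their own. Since ABFS attaches its own `User-Agent` header to every request, the builder-level value never reaches the wire, and the empty string simply skips building the default.
   
   ```java
   import org.apache.http.HttpHeaders;
   import org.apache.http.client.methods.HttpGet;
   import org.apache.http.impl.client.CloseableHttpClient;
   import org.apache.http.impl.client.HttpClients;
   
   public class UserAgentSketch {
     public static void main(String[] args) throws Exception {
       // Empty user agent: skips the VersionInfo lookup at client-build time.
       try (CloseableHttpClient client = HttpClients.custom()
           .setUserAgent("")
           .build()) {
         HttpGet get = new HttpGet("https://example.dfs.core.windows.net/fs/file");
         // ABFS-style: the per-request header wins regardless of the builder
         // value. (The value below is a placeholder, not the real ABFS agent.)
         get.setHeader(HttpHeaders.USER_AGENT, "SampleUserAgent/1.0");
       }
     }
   }
   ```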





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535516995


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperation.java:
##########
@@ -0,0 +1,510 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsPerfLoggable;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+/**
+ * Base Http operation class for orchestrating server IO calls. Child classes
+ * define the specific orchestration implementation based on the network library used.
+ * <p>
+ * For JDK netlib usage, the child class would be {@link AbfsHttpOperation}. <br>
+ * For ApacheHttpClient netlib usage, the child class would be {@link AbfsAHCHttpOperation}.
+ * </p>
+ */
+public abstract class HttpOperation implements AbfsPerfLoggable {
+
+  private final Logger log;
+
+  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
+
+  private static final int ONE_THOUSAND = 1000;
+
+  private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;
+
+  private String method;
+
+  private URL url;
+
+  private String maskedUrl;
+
+  private String maskedEncodedUrl;
+
+  private int statusCode;
+
+  private String statusDescription;
+
+  private String storageErrorCode = "";
+
+  private String storageErrorMessage = "";
+
+  private String requestId = "";
+
+  private String expectedAppendPos = "";
+
+  private ListResultSchema listResultSchema = null;
+
+  // metrics
+  private int bytesSent;
+
+  private int expectedBytesToBeSent;
+
+  private long bytesReceived;
+
+  private long connectionTimeMs;
+
+  private long sendRequestTimeMs;
+
+  private long recvResponseTimeMs;
+
+  private boolean shouldMask = false;
+
+  public HttpOperation(Logger logger,
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    this.log = logger;
+    this.url = url;
+    this.method = method;
+    this.statusCode = httpStatus;
+  }
+
+  public HttpOperation(final Logger log, final URL url, final String method) {
+    this.log = log;
+    this.url = url;
+    this.method = method;
+  }
+
+  public String getMethod() {
+    return method;
+  }
+
+  public String getHost() {
+    return url.getHost();
+  }
+
+  public int getStatusCode() {
+    return statusCode;
+  }
+
+  public String getStatusDescription() {
+    return statusDescription;
+  }
+
+  public String getStorageErrorCode() {
+    return storageErrorCode;
+  }
+
+  public String getStorageErrorMessage() {
+    return storageErrorMessage;
+  }
+
+  public abstract String getClientRequestId();
+
+  public String getExpectedAppendPos() {
+    return expectedAppendPos;
+  }
+
+  public String getRequestId() {
+    return requestId;
+  }
+
+  public void setMaskForSAS() {
+    shouldMask = true;
+  }
+
+  public int getBytesSent() {
+    return bytesSent;
+  }
+
+  public int getExpectedBytesToBeSent() {
+    return expectedBytesToBeSent;
+  }
+
+  public long getBytesReceived() {
+    return bytesReceived;
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public abstract String getResponseHeader(String httpHeader);
+
+  void setExpectedBytesToBeSent(int expectedBytesToBeSent) {
+    this.expectedBytesToBeSent = expectedBytesToBeSent;
+  }
+
+  void setStatusCode(int statusCode) {
+    this.statusCode = statusCode;
+  }
+
+  void setStatusDescription(String statusDescription) {
+    this.statusDescription = statusDescription;
+  }
+
+  void setBytesSent(int bytesSent) {
+    this.bytesSent = bytesSent;
+  }
+
+  void setSendRequestTimeMs(long sendRequestTimeMs) {
+    this.sendRequestTimeMs = sendRequestTimeMs;
+  }
+
+  void setRecvResponseTimeMs(long recvResponseTimeMs) {
+    this.recvResponseTimeMs = recvResponseTimeMs;
+  }
+
+  void setRequestId(String requestId) {
+    this.requestId = requestId;
+  }
+
+  void setConnectionTimeMs(long connectionTimeMs) {
+    this.connectionTimeMs = connectionTimeMs;
+  }
+
+  // Returns a trace message for the request
+  @Override
+  public String toString() {
+    final StringBuilder sb = new StringBuilder();
+    sb.append(statusCode);
+    sb.append(",");
+    sb.append(storageErrorCode);
+    sb.append(",");
+    sb.append(expectedAppendPos);
+    sb.append(",cid=");
+    sb.append(getClientRequestId());
+    sb.append(",rid=");
+    sb.append(requestId);
+    sb.append(",connMs=");
+    sb.append(connectionTimeMs);
+    sb.append(",sendMs=");
+    sb.append(sendRequestTimeMs);
+    sb.append(",recvMs=");
+    sb.append(recvResponseTimeMs);
+    sb.append(",sent=");
+    sb.append(bytesSent);
+    sb.append(",recv=");
+    sb.append(bytesReceived);
+    sb.append(",");
+    sb.append(method);
+    sb.append(",");
+    sb.append(getMaskedUrl());
+    return sb.toString();
+  }
+
+  // Returns a trace message for the ABFS API logging service to consume
+  public String getLogString() {
+
+    final StringBuilder sb = new StringBuilder();
+    sb.append("s=")
+        .append(statusCode)
+        .append(" e=")
+        .append(storageErrorCode)
+        .append(" ci=")
+        .append(getClientRequestId())
+        .append(" ri=")
+        .append(requestId)
+
+        .append(" ct=")
+        .append(connectionTimeMs)
+        .append(" st=")
+        .append(sendRequestTimeMs)
+        .append(" rt=")
+        .append(recvResponseTimeMs)
+
+        .append(" bs=")
+        .append(bytesSent)
+        .append(" br=")
+        .append(bytesReceived)
+        .append(" m=")
+        .append(method)
+        .append(" u=")
+        .append(getMaskedEncodedUrl());
+
+    return sb.toString();
+  }
+
+  public String getMaskedUrl() {
+    if (!shouldMask) {
+      return url.toString();
+    }
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url);
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
+    return maskedEncodedUrl;
+  }
+
+  public abstract void sendPayload(byte[] buffer, int offset, int length) throws
+      IOException;
+
+  public abstract void processResponse(byte[] buffer,
+      int offset,
+      int length) throws IOException;
+
+  public abstract void setRequestProperty(String key, String value);
+
+  void parseResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    long startTime;
+    if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+      // If it is HEAD, and it is ERROR
+      return;
+    }
+
+    startTime = System.nanoTime();
+
+    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+      processStorageErrorResponse();
+      this.recvResponseTimeMs += elapsedTimeMs(startTime);
+      String contentLength = getResponseHeader(
+          HttpHeaderConfigurations.CONTENT_LENGTH);
+      if (contentLength != null) {
+        this.bytesReceived = Long.parseLong(contentLength);
+      } else {
+        this.bytesReceived = 0L;
+      }
+
+    } else {
+      // consume the input stream to release resources
+      int totalBytesRead = 0;
+
+      try (InputStream stream = getContentInputStream()) {
+        if (isNullInputStream(stream)) {
+          return;
+        }
+        boolean endOfStream = false;
+
+        // this is a list operation and need to retrieve the data
+        // need a better solution
+        if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+            && buffer == null) {
+          parseListFilesResponse(stream);
+        } else {
+          if (buffer != null) {
+            while (totalBytesRead < length) {
+              int bytesRead = stream.read(buffer, offset + totalBytesRead,
+                  length
+                      - totalBytesRead);
+              if (bytesRead == -1) {
+                endOfStream = true;
+                break;
+              }
+              totalBytesRead += bytesRead;
+            }
+          }
+          if (!endOfStream && stream.read() != -1) {

Review Comment:
   I didn't understand the purpose of this extra read.
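   
   For context, this looks like the standard drain-before-reuse check (a hedged reading, assuming the elided branch that follows consumes the stream): one extra `read()` after the caller's buffer is full probes whether the server sent more bytes than requested, and a leftover body has to be fully consumed before an HTTP/1.1 connection can safely go back to the keep-alive pool. A minimal sketch of the pattern:
   
   ```java
   import java.io.IOException;
   import java.io.InputStream;
   
   final class DrainSketch {
     // Mirrors the PR's CLEAN_UP_BUFFER_SIZE (64 KB).
     private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;
   
     /** Consume whatever is left so the connection is safe to reuse. */
     static void drain(InputStream stream) throws IOException {
       final byte[] scratch = new byte[CLEAN_UP_BUFFER_SIZE];
       while (stream.read(scratch) != -1) {
         // Discard: bytes are read only to reach end-of-stream.
       }
     }
   }
   ```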





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540986948


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsConnectionManager.java:
##########
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.config.Registry;
+import org.apache.http.config.SocketConfig;
+import org.apache.http.conn.ConnectionPoolTimeoutException;
+import org.apache.http.conn.ConnectionRequest;
+import org.apache.http.conn.HttpClientConnectionManager;
+import org.apache.http.conn.HttpClientConnectionOperator;
+import org.apache.http.conn.routing.HttpRoute;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.impl.conn.DefaultHttpClientConnectionOperator;
+import org.apache.http.impl.conn.ManagedHttpClientConnectionFactory;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.Asserts;
+
+/**
+ * AbfsConnectionManager is a custom implementation of {@link HttpClientConnectionManager}.
+ * This implementation manages connection-pooling heuristics and custom implementation
+ * of {@link ManagedHttpClientConnectionFactory}.
+ */
+public class AbfsConnectionManager implements HttpClientConnectionManager {
+
+  private final KeepAliveCache kac = KeepAliveCache.getInstance();
+
+  private final AbfsConnFactory httpConnectionFactory;
+
+  private final HttpClientConnectionOperator connectionOperator;
+
+  public AbfsConnectionManager(Registry<ConnectionSocketFactory> socketFactoryRegistry,
+      AbfsConnFactory connectionFactory) {
+    this.httpConnectionFactory = connectionFactory;
+    connectionOperator = new DefaultHttpClientConnectionOperator(
+        socketFactoryRegistry, null, null);
+  }
+
+  @Override
+  public ConnectionRequest requestConnection(final HttpRoute route,
+      final Object state) {
+    return new ConnectionRequest() {
+      @Override
+      public HttpClientConnection get(final long timeout,
+          final TimeUnit timeUnit)
+          throws InterruptedException, ExecutionException,
+          ConnectionPoolTimeoutException {
+        try {
+          HttpClientConnection client = kac.get(route);
+          if (client != null && client.isOpen()) {

Review Comment:
   We should also check that the connection has not become stale before handing it out for reuse; see the sketch below.
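   
   A minimal sketch of the suggested check (`isStale()` comes from `org.apache.http.HttpConnection` and costs a short blocking read per reuse, so this is a trade-off and not the PR's final code):
   
   ```java
   // Hypothetical helper slotted into the ConnectionRequest#get shown above.
   private HttpClientConnection leaseOrCreate(HttpRoute route) throws IOException {
     HttpClientConnection client = kac.get(route);
     // isStale() performs a short blocking read to detect a half-closed
     // socket, so a connection that died while idle is never handed out.
     if (client != null && client.isOpen() && !client.isStale()) {
       return client;
     }
     return httpConnectionFactory.create(route, null);
   }
   ```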





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542741866


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;

Review Comment:
   This variable is not used anywhere.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542745955


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsConnectionManager.java:
##########
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.config.Registry;
+import org.apache.http.config.SocketConfig;
+import org.apache.http.conn.ConnectionPoolTimeoutException;
+import org.apache.http.conn.ConnectionRequest;
+import org.apache.http.conn.HttpClientConnectionManager;
+import org.apache.http.conn.HttpClientConnectionOperator;
+import org.apache.http.conn.routing.HttpRoute;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.impl.conn.DefaultHttpClientConnectionOperator;
+import org.apache.http.impl.conn.ManagedHttpClientConnectionFactory;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.Asserts;
+
+/**
+ * AbfsConnectionManager is a custom implementation of {@link HttpClientConnectionManager}.
+ * This implementation manages connection-pooling heuristics and custom implementation
+ * of {@link ManagedHttpClientConnectionFactory}.
+ */
+public class AbfsConnectionManager implements HttpClientConnectionManager {
+
+  private final KeepAliveCache kac = KeepAliveCache.getInstance();
+
+  private final AbfsConnFactory httpConnectionFactory;
+
+  private final HttpClientConnectionOperator connectionOperator;
+
+  public AbfsConnectionManager(Registry<ConnectionSocketFactory> socketFactoryRegistry,
+      AbfsConnFactory connectionFactory) {
+    this.httpConnectionFactory = connectionFactory;
+    connectionOperator = new DefaultHttpClientConnectionOperator(
+        socketFactoryRegistry, null, null);
+  }
+
+  @Override
+  public ConnectionRequest requestConnection(final HttpRoute route,
+      final Object state) {
+    return new ConnectionRequest() {
+      @Override
+      public HttpClientConnection get(final long timeout,
+          final TimeUnit timeUnit)
+          throws InterruptedException, ExecutionException,
+          ConnectionPoolTimeoutException {
+        try {
+          HttpClientConnection client = kac.get(route);
+          if (client != null && client.isOpen()) {
+            return client;
+          }
+          return httpConnectionFactory.create(route, null);
+        } catch (IOException ex) {
+          throw new ExecutionException(ex);
+        }
+      }
+
+      @Override
+      public boolean cancel() {
+        return false;
+      }
+    };
+  }
+
+  /**
+   * Releases a connection for reuse. It can be reused only if validDuration is greater than 0.
+   * This method is called by {@link org.apache.http.impl.execchain} internal class `ConnectionHolder`.
+   * If it wants to reuse the connection, it will send a non-zero validDuration, else it will send 0.
+   * @param conn the connection to release
+   * @param newState the new state of the connection
+   * @param validDuration the duration for which the connection is valid
+   * @param timeUnit the time unit for the validDuration
+   */
+  @Override
+  public void releaseConnection(final HttpClientConnection conn,
+      final Object newState,
+      final long validDuration,
+      final TimeUnit timeUnit) {
+    if (validDuration == 0) {
+      return;
+    }
+    if (conn.isOpen() && conn instanceof AbfsManagedApacheHttpConnection) {
+      HttpRoute route = ((AbfsManagedApacheHttpConnection) conn).getHttpRoute();
+      if (route != null) {
+        kac.put(route, conn);
+      }
+    }
+  }
+
+  @Override
+  public void connect(final HttpClientConnection conn,
+      final HttpRoute route,
+      final int connectTimeout,
+      final HttpContext context) throws IOException {
+    Asserts.check(conn instanceof AbfsManagedApacheHttpConnection,
+        "Connection not obtained from this manager");
+    long start = System.currentTimeMillis();
+    connectionOperator.connect((AbfsManagedApacheHttpConnection) conn,
+        route.getTargetHost(), route.getLocalSocketAddress(),
+        connectTimeout, SocketConfig.DEFAULT, context);
+    if (context instanceof AbfsManagedHttpContext) {
+      ((AbfsManagedHttpContext) context).setConnectTime(
+          System.currentTimeMillis() - start);
+    }
+  }
+
+  @Override
+  public void upgrade(final HttpClientConnection conn,
+      final HttpRoute route,
+      final HttpContext context) throws IOException {
+    Asserts.check(conn instanceof AbfsManagedApacheHttpConnection,

Review Comment:
   The assertion message is unclear as to which manager it refers to.
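   
   A hypothetical rewording (illustrative only, not the PR's code) that names the manager explicitly:
   
   ```java
   Asserts.check(conn instanceof AbfsManagedApacheHttpConnection,
       "Connection not obtained from AbfsConnectionManager");
   ```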
   





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542800588


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/kac/TestApacheClientConnectionPool.java:
##########
@@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpHost;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+public class TestApacheClientConnectionPool extends
+    AbstractAbfsTestWithTimeout {
+
+  public TestApacheClientConnectionPool() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testBasicPool() throws IOException {
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    validatePoolSize(DEFAULT_MAX_CONN_SYS_PROP);
+  }
+
+  @Test
+  public void testSysPropAppliedPool() throws IOException {
+    final String customPoolSize = "10";
+    System.setProperty(HTTP_MAX_CONN_SYS_PROP, customPoolSize);
+    validatePoolSize(Integer.parseInt(customPoolSize));
+  }
+
+  private void validatePoolSize(int size) throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    final HttpClientConnection[] connections = new HttpClientConnection[size * 2];
+
+    for (int i = 0; i < size * 2; i++) {
+      connections[i] = Mockito.mock(HttpClientConnection.class);
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      keepAliveCache.put(routes, connections[i]);
+    }
+
+    for (int i = size; i < size * 2; i++) {
+      Mockito.verify(connections[i], Mockito.times(1)).close();
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      if (i < size) {
+        Assert.assertNotNull(keepAliveCache.get(routes));
+      } else {
+        Assert.assertNull(keepAliveCache.get(routes));
+      }
+    }
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCache() throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+
+    keepAliveCache.put(routes, connection);
+
+    Assert.assertNotNull(keepAliveCache.get(routes));
+    keepAliveCache.put(routes, connection);
+
+    final HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));
+    Assert.assertNull(keepAliveCache.get(routes1));
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCacheCleanup() throws Exception {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+    keepAliveCache.put(routes, connection);
+
+    Thread.sleep(2 * KAC_CONN_TTL);
+    Mockito.verify(connection, Mockito.times(1)).close();
+    Assert.assertNull(keepAliveCache.get(routes));
+    Mockito.verify(connection, Mockito.times(1)).close();
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCacheCleanupWithConnections() throws Exception {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    keepAliveCache.pauseThread();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+    keepAliveCache.put(routes, connection);
+
+    Thread.sleep(2 * KAC_CONN_TTL);
+    Mockito.verify(connection, Mockito.times(0)).close();
+    Assert.assertNull(keepAliveCache.get(routes));
+    Mockito.verify(connection, Mockito.times(1)).close();
+    keepAliveCache.close();
+  }
+}

Review Comment:
   1. You could add a test case where multiple threads put to and get from the cache simultaneously, and verify the behaviour (a sketch of such a test follows below).
   2. If we retrieve a connection from the cache and then put it back, will it be the same instance?
   3. It could also be verified what happens to the excess connections when more connections are put than the cache can hold.
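   
   On point 1, a hedged sketch of such a test (hypothetical code, reusing the class's existing imports; a `get` may legitimately miss under contention, so the only hard assertion is that no worker throws):
   
   ```java
   // Additional imports assumed: java.util.ArrayList, java.util.List,
   // java.util.concurrent.ExecutorService, java.util.concurrent.Executors,
   // java.util.concurrent.Future.
   @Test
   public void testConcurrentPutAndGet() throws Exception {
     final KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
     final HttpRoute route = new HttpRoute(new HttpHost("localhost"));
     ExecutorService pool = Executors.newFixedThreadPool(8);
     List<Future<?>> futures = new ArrayList<>();
     for (int i = 0; i < 8; i++) {
       futures.add(pool.submit(() -> {
         for (int j = 0; j < 1000; j++) {
           keepAliveCache.put(route, Mockito.mock(HttpClientConnection.class));
           // A miss (null) is fine under contention; an exception is not.
           keepAliveCache.get(route);
         }
         return null;
       }));
     }
     for (Future<?> f : futures) {
       f.get(); // surfaces any exception thrown inside a worker
     }
     pool.shutdown();
     keepAliveCache.close();
   }
   ```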





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1546281317


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpClientRequestExecutor.java:
##########
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.URL;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpEntityEnclosingRequest;
+import org.apache.http.HttpException;
+import org.apache.http.HttpRequest;
+
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_NETWORKING_LIBRARY;
+import static org.apache.hadoop.fs.azurebfs.services.HttpOperationType.APACHE_HTTP_CLIENT;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+public class ITestAbfsHttpClientRequestExecutor extends
+    AbstractAbfsIntegrationTest {
+
+  public ITestAbfsHttpClientRequestExecutor() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testExpect100ContinueHandling() throws Exception {

Review Comment:
   I have extracted some pieces of code into separate methods and added exhaustive comments for better code readability.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029909989

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  59m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 56s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 19s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/35/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 21s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 146m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/35/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b5ac96c77733 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 812855053e3d8e052244b8525bf9313d0f235a13 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/35/testReport/ |
   | Max. process+thread count | 705 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/35/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1549232373


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -363,6 +364,10 @@ public class AbfsConfiguration{
       FS_AZURE_ABFS_ENABLE_CHECKSUM_VALIDATION, DefaultValue = DEFAULT_ENABLE_ABFS_CHECKSUM_VALIDATION)
   private boolean isChecksumValidationEnabled;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey =
+      FS_AZURE_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES, DefaultValue = DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)

Review Comment:
   3 retries would happen relatively quickly (in the case of exponential retry), and it is a large enough number for a genuine transient issue to get resolved.
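   
   As a rough worked example (illustrative numbers only, not the ABFS defaults): with a 500 ms base interval that doubles per attempt, the three retries fire roughly 0.5 s, 1 s and 2 s after their respective failures, so the whole sequence completes within about 3.5 s and a transient socket error is retried away without materially delaying the caller.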





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031353744

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 51s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 12s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/37/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 10 new + 18 unchanged - 0 fixed = 28 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 30s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 131m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/37/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux aaae3bc7929d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a3bee422675cc56b2f94205ae3a5a0ec40e83e3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/37/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/37/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031854632

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  6s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 25s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/38/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 54s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 128m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/38/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2521b3c75554 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0eacd161d0104391d1c184131d343d3b64821d53 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/38/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/38/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2009522911

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 20 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 49s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 10s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 123 new + 18 unchanged - 0 fixed = 141 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  9s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  36m 35s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 135m 14s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 330] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:[line 84] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:[line 105] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:[line 92] |
   |  |  Unread field:AbfsConnectionManager.java:[line 116] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 34] |
   |  |  Unread field:AbfsManagedHttpRequestExecutor.java:[line 57] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 40] |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
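   For readers skimming the keySet-iterator finding above, a minimal sketch of the flagged pattern and the entrySet form SpotBugs prefers; KeepAliveKey and ClientVector are the type names taken from the findings, while the map and the loop bodies are assumptions for illustration only.
   
   import java.util.HashMap;
   import java.util.Map;
   
   final class IteratorSketch {
     static final class KeepAliveKey { }   // stand-in for the real key type
     static final class ClientVector { }   // stand-in for the real value type
   
     static void cleanup(HashMap<KeepAliveKey, ClientVector> cache) {
       // Flagged pattern: every iteration pays for a second hash lookup.
       for (KeepAliveKey key : cache.keySet()) {
         ClientVector connections = cache.get(key);
         // ... expire idle connections ...
       }
       // Preferred form: the entry carries key and value together.
       for (Map.Entry<KeepAliveKey, ClientVector> entry : cache.entrySet()) {
         ClientVector connections = entry.getValue();
         // ... expire idle connections ...
       }
     }
   }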
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3b287fd33fc7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aa3c78796eb3dcf939f6163af9671f64527439d8 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/testReport/ |
   | Max. process+thread count | 724 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/13/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542920863


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpClientRequestExecutor.java:
##########
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.URL;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpEntityEnclosingRequest;
+import org.apache.http.HttpException;
+import org.apache.http.HttpRequest;
+
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_NETWORKING_LIBRARY;
+import static org.apache.hadoop.fs.azurebfs.services.HttpOperationType.APACHE_HTTP_CLIENT;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+public class ITestAbfsHttpClientRequestExecutor extends
+    AbstractAbfsIntegrationTest {
+
+  public ITestAbfsHttpClientRequestExecutor() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testExpect100ContinueHandling() throws Exception {

Review Comment:
   The test is too complex; can it be broken down into simpler functions?





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544256904


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {

Review Comment:
   Which context are you referring to? Kindly suggest, please.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544201712


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>

Review Comment:
   It was earlier required when using PoolingHttpClientConnectionManager, which requires the application to supply a max-connection setting, so for each fileSystem we would need to create a different httpClient. This is no longer required with the KAC-adapted connection manager. Have removed this piece of code.
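   For illustration, a minimal sketch of the constraint being described, assuming stock httpclient 4.x APIs (the pool sizes shown are placeholders, not values from the PR): PoolingHttpClientConnectionManager must be sized when the client is built, which is what pushed the earlier design toward one AbfsApacheHttpClient per fileSystem.
   
   import org.apache.http.impl.client.CloseableHttpClient;
   import org.apache.http.impl.client.HttpClients;
   import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
   
   final class PoolSizingSketch {
     static CloseableHttpClient newClient() {
       PoolingHttpClientConnectionManager connMgr =
           new PoolingHttpClientConnectionManager();
       // The manager only honours caps the application supplies up front;
       // ABFS has no principled way to pick these numbers at init time.
       connMgr.setMaxTotal(100);          // placeholder cap
       connMgr.setDefaultMaxPerRoute(20); // placeholder cap
       return HttpClients.custom()
           .setConnectionManager(connMgr)
           .build();
     }
   }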





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544325755


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>PoolingHttpClientConnectionManager heuristic caches all the reusable connections it has created.
+ * JDK's implementation only caches limited number of connections. The limit is given by JVM system
+ * property "http.maxConnections". If there is no system-property, it defaults to 5.</li>
+ * <li>In PoolingHttpClientConnectionManager, it expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the total number of connections it can create. For application using ABFS, it is not
+ * feasible to provide a value in the initialisation of the connectionManager. JDK's implementation has no cap on the
+ * number of connections it can create.</li>

Review Comment:
   PoolingHttpClientConnectionManager requires the application to give a maxConn limit, which defines how many open connections the manager can keep. It caches all the opened connections.
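   By contrast, a sketch of the JDK-style cap that the KeepAliveCache javadoc above describes; only the "http.maxConnections" property and its default of 5 come from that javadoc, and the helper class is hypothetical.
   
   final class KeepAliveCapSketch {
     // JVM system property consulted by the JDK heuristic; defaults to 5
     // idle connections when the property is unset.
     private static final int CAP = Integer.getInteger("http.maxConnections", 5);
   
     private int cached; // idle connections currently held
   
     // Keep the connection for reuse while under the cap; otherwise close
     // it instead of pooling it.
     boolean offer(AutoCloseable connection) throws Exception {
       if (cached < CAP) {
         cached++;
         return true;
       }
       connection.close();
       return false;
     }
   }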
   





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535437602


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {

Review Comment:
   Close the httpRequestBase in the finally block accordingly (see the sketch below).
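   A possible shape for that, assuming httpclient 4.x, where HttpRequestBase.releaseConnection() frees the leased connection; the surrounding method and the null guard are assumptions, not code from the PR.
   
   import org.apache.http.client.methods.HttpRequestBase;
   
   final class ReleaseSketch {
     static void executeAndRelease(HttpRequestBase httpRequestBase) {
       try {
         // ... execute the request and consume the response ...
       } finally {
         if (httpRequestBase != null) {
           // Resets the request's internal state and frees the underlying
           // connection; safe to call whether or not the call above failed.
           httpRequestBase.releaseConnection();
         }
       }
     }
   }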





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535496869


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // SDK will set the user agent header in the pipeline. Don't let Apache waste time
+    httpClient = builder.build();
+  }
+
+  public void close() throws IOException {
+    if (httpClient != null) {
+      httpClient.close();
+    }
+  }
+
+  public HttpResponse execute(HttpRequestBase httpRequest,
+      final AbfsManagedHttpContext abfsHttpClientContext) throws IOException {
+    RequestConfig.Builder requestConfigBuilder = RequestConfig
+        .custom()
+        .setConnectTimeout(abfsConfiguration.getHttpConnectionTimeout())
+        .setSocketTimeout(abfsConfiguration.getHttpReadTimeout());
+    httpRequest.setConfig(requestConfigBuilder.build());
+    return httpClient.execute(httpRequest, abfsHttpClientContext);
+  }
+
+
+  private static Registry<ConnectionSocketFactory> createSocketFactoryRegistry(
+      ConnectionSocketFactory sslSocketFactory) {
+    if (sslSocketFactory == null) {
+      return RegistryBuilder.<ConnectionSocketFactory>create()
+          .register("http", PlainConnectionSocketFactory.getSocketFactory())

Review Comment:
   The "http" and "https" scheme names should be read from constant strings (see the sketch below).
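   A minimal sketch of the suggestion; HTTP_SCHEME and HTTPS_SCHEME are hypothetical constant names, not identifiers from the PR, and would presumably live in AbfsHttpConstants or similar.
   
   import org.apache.http.config.Registry;
   import org.apache.http.config.RegistryBuilder;
   import org.apache.http.conn.socket.ConnectionSocketFactory;
   import org.apache.http.conn.socket.PlainConnectionSocketFactory;
   
   final class SchemeRegistrySketch {
     static final String HTTP_SCHEME = "http";   // hypothetical constant
     static final String HTTPS_SCHEME = "https"; // hypothetical constant
   
     static Registry<ConnectionSocketFactory> create(
         ConnectionSocketFactory sslSocketFactory) {
       return RegistryBuilder.<ConnectionSocketFactory>create()
           .register(HTTP_SCHEME, PlainConnectionSocketFactory.getSocketFactory())
           .register(HTTPS_SCHEME, sslSocketFactory)
           .build();
     }
   }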





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1549304525


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,345 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;

Review Comment:
   Removed the kac package.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547523856


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsManagedHttpContext.java:
##########
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.http.HttpClientConnection;
+import org.apache.http.client.protocol.HttpClientContext;
+
+public class AbfsManagedHttpContext extends HttpClientContext {

Review Comment:
   taken.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547523599


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperationType.java:
##########
@@ -0,0 +1,24 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;

Review Comment:
   taken.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547565825


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {

Review Comment:
   Response headers are mapped to a particular response; each response has its own response headers. We can't share one map across all responses, for two reasons:
   1. Each response has a different set of response headers (key/value pairs).
   2. Parallel response handling cannot safely use the same map.
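   
   For illustration, a minimal sketch of the per-response mapping, assuming Apache HttpClient 4.x types (the helper class name here is hypothetical, not part of the PR):
   
       import java.util.ArrayList;
       import java.util.HashMap;
       import java.util.List;
       import java.util.Map;
   
       import org.apache.http.Header;
       import org.apache.http.HttpResponse;
   
       // Hypothetical helper: builds a fresh map per response so that two
       // responses handled in parallel never mutate the same map.
       final class ResponseHeaderMapper {
         private ResponseHeaderMapper() {
         }
   
         static Map<String, List<String>> toMap(HttpResponse response) {
           Map<String, List<String>> headers = new HashMap<>();
           if (response == null || response.getAllHeaders() == null) {
             return headers;
           }
           for (Header header : response.getAllHeaders()) {
             headers.computeIfAbsent(header.getName(), k -> new ArrayList<>())
                 .add(header.getValue());
           }
           return headers;
         }
       }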



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031960607

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 49s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 10s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/39/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 26s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/39/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e98b32386339 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 99b2f496cbf3df06361f699569c24b904b676b44 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/39/testReport/ |
   | Max. process+thread count | 647 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/39/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2031978575

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 55s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 17s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/40/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m  6s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/40/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4a364f718a9b 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 266caf2fd0d92a13de7301bdf361810cd5ea82ca |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/40/testReport/ |
   | Max. process+thread count | 700 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/40/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1545990964


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/kac/TestApacheClientConnectionPool.java:
##########
@@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpHost;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+public class TestApacheClientConnectionPool extends
+    AbstractAbfsTestWithTimeout {
+
+  public TestApacheClientConnectionPool() throws Exception {
+    super();
+  }
+
+  @Test
+  public void testBasicPool() throws IOException {
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    validatePoolSize(DEFAULT_MAX_CONN_SYS_PROP);
+  }
+
+  @Test
+  public void testSysPropAppliedPool() throws IOException {
+    final String customPoolSize = "10";
+    System.setProperty(HTTP_MAX_CONN_SYS_PROP, customPoolSize);
+    validatePoolSize(Integer.parseInt(customPoolSize));
+  }
+
+  private void validatePoolSize(int size) throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    final HttpClientConnection[] connections = new HttpClientConnection[size * 2];
+
+    for (int i = 0; i < size * 2; i++) {
+      connections[i] = Mockito.mock(HttpClientConnection.class);
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      keepAliveCache.put(routes, connections[i]);
+    }
+
+    for (int i = size; i < size * 2; i++) {
+      Mockito.verify(connections[i], Mockito.times(1)).close();
+    }
+
+    for (int i = 0; i < size * 2; i++) {
+      if (i < size) {
+        Assert.assertNotNull(keepAliveCache.get(routes));
+      } else {
+        Assert.assertNull(keepAliveCache.get(routes));
+      }
+    }
+    System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+    keepAliveCache.close();
+  }
+
+  @Test
+  public void testKeepAliveCache() throws IOException {
+    KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+    final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+
+    keepAliveCache.put(routes, connection);
+
+    Assert.assertNotNull(keepAliveCache.get(routes));
+    keepAliveCache.put(routes, connection);
+
+    final HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));

Review Comment:
   We have not added this route to the cache. It is just a key created to verify that KeepAliveCache returns only the connections attached to a given key. Here, no connection is attached to routes1, so a get() with routes1 returns null.
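   
   As a minimal sketch of that expectation, mirroring the test quoted above (same cache API as in this PR; the test class name is hypothetical):
   
       import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
       import org.apache.http.HttpClientConnection;
       import org.apache.http.HttpHost;
       import org.apache.http.conn.routing.HttpRoute;
       import org.junit.Assert;
       import org.junit.Test;
       import org.mockito.Mockito;
   
       public class TestUnknownRouteReturnsNull {
         @Test
         public void testGetWithUnknownRoute() throws Exception {
           KeepAliveCache cache = KeepAliveCache.getInstance();
           HttpRoute cachedRoute = new HttpRoute(new HttpHost("localhost"));
           cache.put(cachedRoute, Mockito.mock(HttpClientConnection.class));
   
           // routes1 was never put into the cache, so no connection is
           // attached to it and get() returns null.
           HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));
           Assert.assertNotNull(cache.get(cachedRoute));
           Assert.assertNull(cache.get(routes1));
           cache.close();
         }
       }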



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1535430472


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {

Review Comment:
   The same map could be reused to improve memory usage, instead of creating a new one for each response.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2015155762

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 23s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 45s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/23/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 2 new + 18 unchanged - 0 fixed = 20 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 20s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 127m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/23/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0f7b337bbe95 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a8a5194aaf0375d438da43b18673df630e47368b |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/23/testReport/ |
   | Max. process+thread count | 684 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/23/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2005760178

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 18 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  9s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 30s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 140 new + 18 unchanged - 0 fixed = 158 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  39m 16s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 138m 48s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsConnectionManager.java:[line 113] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 63] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 88] |
   |  |  Unread field:AbfsApacheHttpClient.java:[line 68] |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Unused field:AbfsApacheHttpClient.java |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.isResponseAvailable(int)  At AbfsConnFactory.java:[line 92] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.receiveResponseHeader()  At AbfsConnFactory.java:[line 113] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory$AbfsApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsConnFactory.java:[line 100] |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ccc61cd01e42 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 110b9b61ce73cecd0222d8971311b86be1a32939 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1540806011


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache HttpClient.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient has
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()

Review Comment:
   @Override
   InputStream getContentInputStream() throws IOException {
       if (httpResponse != null && httpResponse.getEntity() != null) {
           return httpResponse.getEntity().getContent();
       }
       return null;
   }

   This can be simplified.
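
   One way the simplification could look, as a minimal sketch (it just
   collapses the two getEntity() lookups into guard expressions; not
   necessarily the exact change the reviewer had in mind):

   @Override
   InputStream getContentInputStream() throws IOException {
     HttpEntity entity = httpResponse == null ? null : httpResponse.getEntity();
     return entity == null ? null : entity.getContent();
   }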





Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2009123337

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 19 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 46s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m  7s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 135 new + 18 unchanged - 0 fixed = 153 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m 10s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  32m 41s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 126m 38s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:[line 84] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:[line 105] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:[line 92] |
   |  |  Unread field:AbfsConnectionManager.java:[line 116] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 34] |
   |  |  Unread field:AbfsManagedHttpRequestExecutor.java:[line 57] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 40] |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7cdcfeb86384 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3cb52f4c1c3bb146a6762a6a279f83866cd9b249 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
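
   For reference on the keySet-iterator finding above: iterating entrySet
   retrieves key and value in a single lookup, while iterating keySet forces
   a second get() per key. A self-contained sketch of the pattern SpotBugs
   prefers (generic names, not the actual KeepAliveCache types):

   import java.util.HashMap;
   import java.util.Map;

   public class EntrySetIteration {
     public static void main(String[] args) {
       Map<String, Integer> cache = new HashMap<>();
       cache.put("route-a", 1);
       cache.put("route-b", 2);

       // Inefficient: each cache.get(key) is an extra hash lookup.
       for (String key : cache.keySet()) {
         System.out.println(key + " -> " + cache.get(key));
       }

       // Preferred: entrySet yields key and value together.
       for (Map.Entry<String, Integer> entry : cache.entrySet()) {
         System.out.println(entry.getKey() + " -> " + entry.getValue());
       }
     }
   }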
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2012250055

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 21 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m 55s |  |  trunk passed  |
   | -1 :x: |  compile  |   0m 32s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 30s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 28s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 30s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 30s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 30s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 30s | [/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in trunk failed.  |
   | -1 :x: |  shadedclient  |   4m  9s |  |  branch has errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   4m 39s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 32s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-azure.txt) |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 38s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 24s | [/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |   4m 44s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m  3s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  71m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux abe73f7f17da 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1022f1b397f6217f7944736b5e0c58bb51e6b96b |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/testReport/ |
   | Max. process+thread count | 149 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/20/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2009059828

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 19 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 11s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 32s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 141 new + 18 unchanged - 0 fixed = 159 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  9s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 22s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 129m 29s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to startTime in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processConnHeadersAndInputStreams(byte[], int, int)  At AbfsHttpOperation.java:[line 337] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.isResponseAvailable(int)  At AbfsManagedApacheHttpConnection.java:[line 84] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.receiveResponseHeader()  At AbfsManagedApacheHttpConnection.java:[line 105] |
   |  |  Dead store to start in org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection.sendRequestHeader(HttpRequest)  At AbfsManagedApacheHttpConnection.java:[line 92] |
   |  |  Unread field:AbfsConnectionManager.java:[line 119] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 34] |
   |  |  Unread field:AbfsManagedHttpRequestExecutor.java:[line 57] |
   |  |  Unread field:AbfsManagedHttpContext.java:[line 40] |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  Unused field:AbfsManagedHttpContext.java |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE isn't final and can't be protected from malicious code  At KeepAliveCache.java:be protected from malicious code  At KeepAliveCache.java:[line 71] |
   |  |  Exception is caught when Exception is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:is not thrown in org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup()  At KeepAliveCache.java:[line 131] |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field thread  In KeepAliveCache.java:instance field thread  In KeepAliveCache.java |
   |  |  Write to static field org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.INSTANCE from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:from instance method org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.close()  At KeepAliveCache.java:[line 47] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache.kacCleanup() makes inefficient use of keySet iterator instead of entrySet iterator  At KeepAliveCache.java:keySet iterator instead of entrySet iterator  At KeepAliveCache.java:[line 106] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector doesn't override java.util.Vector.equals(Object)  At KeepAliveCache.java:At KeepAliveCache.java:[line 1] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 247-250] |
   |  |  Should org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveKey be a _static_ inner class?  At KeepAliveCache.java:inner class?  At KeepAliveCache.java:[lines 220-239] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 80eb323e02a7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15c380010cefa2501313b5509b8658411ac883e4 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2014472463

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 50s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 11s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 5 new + 18 unchanged - 0 fixed = 23 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 25s | [/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) |  hadoop-azure in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   1m 10s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  37m 37s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 137m  6s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  org.apache.hadoop.fs.azurebfs.services.AbfsManagedApacheHttpConnection defines equals and uses Object.hashCode()  At AbfsManagedApacheHttpConnection.java:Object.hashCode()  At AbfsManagedApacheHttpConnection.java:[lines 186-190] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$ClientVector defines equals but not hashCode  At KeepAliveCache.java:hashCode  At KeepAliveCache.java:[line 245] |
   |  |  org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache$KeepAliveEntry defines equals and uses Object.hashCode()  At KeepAliveCache.java:Object.hashCode()  At KeepAliveCache.java:[lines 302-306] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6bf2a4bdfefb 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cecedef8796e884ceccd618139b6195b5e90a1eb |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/testReport/ |
   | Max. process+thread count | 613 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/21/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
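
   For reference on the equals/hashCode findings above: a class overriding
   equals should override hashCode over the same fields, otherwise it
   misbehaves in hash-based collections. An illustrative shape (hypothetical
   fields, not the actual KeepAliveEntry):

   import java.util.Objects;

   final class KeepAliveEntry {
     private final Object connection;
     private final long idleStartTime;

     KeepAliveEntry(Object connection, long idleStartTime) {
       this.connection = connection;
       this.idleStartTime = idleStartTime;
     }

     @Override
     public boolean equals(Object o) {
       if (this == o) {
         return true;
       }
       if (!(o instanceof KeepAliveEntry)) {
         return false;
       }
       KeepAliveEntry other = (KeepAliveEntry) o;
       return idleStartTime == other.idleStartTime
           && Objects.equals(connection, other.connection);
     }

     @Override
     public int hashCode() {
       // Consistent with equals: derived from the same fields.
       return Objects.hash(connection, idleStartTime);
     }
   }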
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1544253009


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsConnectionManager.java:
##########
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.config.Registry;
+import org.apache.http.config.SocketConfig;
+import org.apache.http.conn.ConnectionPoolTimeoutException;
+import org.apache.http.conn.ConnectionRequest;
+import org.apache.http.conn.HttpClientConnectionManager;
+import org.apache.http.conn.HttpClientConnectionOperator;
+import org.apache.http.conn.routing.HttpRoute;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.impl.conn.DefaultHttpClientConnectionOperator;
+import org.apache.http.impl.conn.ManagedHttpClientConnectionFactory;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.Asserts;
+
+/**
+ * AbfsConnectionManager is a custom implementation of {@link HttpClientConnectionManager}.
+ * This implementation manages connection-pooling heuristics and a custom
+ * implementation of {@link ManagedHttpClientConnectionFactory}.
+ */
+public class AbfsConnectionManager implements HttpClientConnectionManager {
+
+  private final KeepAliveCache kac = KeepAliveCache.getInstance();
+
+  private final AbfsConnFactory httpConnectionFactory;
+
+  private final HttpClientConnectionOperator connectionOperator;
+
+  public AbfsConnectionManager(Registry<ConnectionSocketFactory> socketFactoryRegistry,
+      AbfsConnFactory connectionFactory) {
+    this.httpConnectionFactory = connectionFactory;
+    connectionOperator = new DefaultHttpClientConnectionOperator(
+        socketFactoryRegistry, null, null);
+  }
+
+  @Override
+  public ConnectionRequest requestConnection(final HttpRoute route,
+      final Object state) {
+    return new ConnectionRequest() {
+      @Override
+      public HttpClientConnection get(final long timeout,
+          final TimeUnit timeUnit)
+          throws InterruptedException, ExecutionException,
+          ConnectionPoolTimeoutException {
+        try {
+          HttpClientConnection client = kac.get(route);
+          if (client != null && client.isOpen()) {

Review Comment:
   Very good point!
   
   This would get checked internally in MainClientExec anyway, but it is better to prevent it earlier.
   
   Will take this up!
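
   A rough sketch of the earlier check being discussed, assuming a pooled
   connection can go stale while cached (hypothetical helper on
   AbfsConnectionManager; the merged change may differ):

   /**
    * Return a usable pooled connection, or null so the caller creates a
    * fresh one via the connection factory.
    */
   private HttpClientConnection getUsableCachedConnection(HttpRoute route) {
     HttpClientConnection client = kac.get(route);
     // isStale() probes the socket and catches server-side closes that
     // isOpen() alone would miss; stale entries are simply not reused.
     if (client != null && client.isOpen() && !client.isStale()) {
       return client;
     }
     return null;
   }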





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2029430513

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 22 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 46s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m  8s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) |  hadoop-tools/hadoop-azure: The patch generated 9 new + 18 unchanged - 0 fixed = 27 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m 28s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m 37s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m 14s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Class org.apache.hadoop.fs.azurebfs.services.kac.KeepAliveCache defines non-transient non-serializable instance field keepAliveTimer  In KeepAliveCache.java:instance field keepAliveTimer  In KeepAliveCache.java |
   | Failed junit tests | hadoop.fs.azurebfs.services.kac.TestApacheClientConnectionPool |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6633 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6f8f1c5a049e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 80e54bdc543d79bbe8473d17f18e46e5406bc860 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/31/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
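
   On the serialization finding above: if KeepAliveCache is to remain
   Serializable, a non-serializable field such as a timer is normally marked
   transient so it is skipped during serialization (illustrative; the actual
   fix in the patch may differ):

   // Timer is not Serializable; transient excludes it from the serialized
   // form, so it must be re-created after deserialization.
   private transient java.util.Timer keepAliveTimer;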
   
   




Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547575256


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {
+      httpRequestBase = new HttpPut(getUri());
+    }
+    if (HTTP_METHOD_PATCH.equals(getMethod())) {
+      httpRequestBase = new HttpPatch(getUri());
+    }
+    if (HTTP_METHOD_POST.equals(getMethod())) {
+      httpRequestBase = new HttpPost(getUri());
+    }
+
+    setExpectedBytesToBeSent(length);
+    if (buffer != null) {
+      HttpEntity httpEntity = new ByteArrayEntity(buffer, offset, length,
+          TEXT_PLAIN);
+      ((HttpEntityEnclosingRequestBase) httpRequestBase).setEntity(
+          httpEntity);
+    }
+
+    translateHeaders(httpRequestBase, requestHeaders);
+    try {
+      httpResponse = executeRequest();
+    } catch (AbfsApacheHttpExpect100Exception ex) {
+      LOG.debug(
+          "Getting output stream failed with expect header enabled, returning back ",
+          ex);
+      connectionDisconnectedOnError = true;
+      httpResponse = ex.getHttpResponse();
+      abfsApacheHttpExpect100Exception = ex;
+    } finally {
+      if (!connectionDisconnectedOnError
+          && httpRequestBase instanceof HttpEntityEnclosingRequestBase) {
+        setBytesSent(length);
+      }
+    }
+  }
+
+  private void prepareRequest() throws IOException {
+    if (HTTP_METHOD_GET.equals(getMethod())) {
+      httpRequestBase = new HttpGet(getUri());
+    }
+    if (HTTP_METHOD_DELETE.equals(getMethod())) {
+      httpRequestBase = new HttpDelete(getUri());
+    }
+    if (HTTP_METHOD_HEAD.equals(getMethod())) {
+      httpRequestBase = new HttpHead(getUri());
+    }
+    translateHeaders(httpRequestBase, requestHeaders);
+  }
+
+  private URI getUri() throws IOException {
+    try {
+      return getUrl().toURI();
+    } catch (URISyntaxException e) {
+      throw new IOException(e);
+    }
+  }
+
+  private void translateHeaders(final HttpRequestBase httpRequestBase,
+      final List<AbfsHttpHeader> requestHeaders) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      httpRequestBase.setHeader(header.getName(), header.getValue());
+    }
+  }
+
+  public void setHeader(String name, String val) {
+    requestHeaders.add(new AbfsHttpHeader(name, val));
+  }
+
+  @Override
+  public String getRequestProperty(String name) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(name)) {
+        return header.getValue();
+      }
+    }
+    return "";

Review Comment:
   taken.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1547567832


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {
+    abfsHttpClientContext = setFinalAbfsClientContext();
+    HttpResponse response = abfsApacheHttpClient.execute(httpRequestBase,
+        abfsHttpClientContext);
+    setConnectionTimeMs(abfsHttpClientContext.getConnectTime());
+    setSendRequestTimeMs(abfsHttpClientContext.getSendTime());
+    setRecvResponseTimeMs(abfsHttpClientContext.getReadTime());
+    return response;
+  }
+
+  private Map<String, List<String>> getResponseHeaders(final HttpResponse httpResponse) {
+    if (httpResponse == null || httpResponse.getAllHeaders() == null) {
+      return new HashMap<>();
+    }
+    Map<String, List<String>> map = new HashMap<>();
+    for (Header header : httpResponse.getAllHeaders()) {
+      map.put(header.getName(), new ArrayList<String>(
+          Collections.singleton(header.getValue())));
+    }
+    return map;
+  }
+
+  @Override
+  public void setRequestProperty(final String key, final String value) {
+    setHeader(key, value);
+  }
+
+  @Override
+  Map<String, List<String>> getRequestProperties() {
+    Map<String, List<String>> map = new HashMap<>();
+    for (AbfsHttpHeader header : requestHeaders) {
+      map.put(header.getName(),
+          new ArrayList<String>() {{
+            add(header.getValue());
+          }});
+    }
+    return map;
+  }
+
+  @Override
+  public String getResponseHeader(final String headerName) {
+    if (httpResponse == null) {
+      return null;
+    }
+    Header header = httpResponse.getFirstHeader(headerName);
+    if (header != null) {
+      return header.getValue();
+    }
+    return null;
+  }
+
+  @Override
+  InputStream getContentInputStream()
+      throws IOException {
+    if (httpResponse == null) {
+      return null;
+    }
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      return httpResponse.getEntity().getContent();
+    }
+    return null;
+  }
+
+  public void sendPayload(final byte[] buffer,
+      final int offset,
+      final int length)
+      throws IOException {
+    if (!isPayloadRequest) {
+      return;
+    }
+
+    if (HTTP_METHOD_PUT.equals(getMethod())) {

Review Comment:
   taken.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anujmodi2021 (via GitHub)" <gi...@apache.org>.
anujmodi2021 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542292161


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -842,6 +847,17 @@ public DelegatingSSLSocketFactory.SSLChannelMode getPreferredSSLFactoryOption()
     return getEnum(FS_AZURE_SSL_CHANNEL_MODE_KEY, DEFAULT_FS_AZURE_SSL_CHANNEL_MODE);
   }
 
+  /**
+   * @return Config to select netlib for server communication.
+   */
+  public HttpOperationType getPreferredHttpOperationType() {
+    return getEnum(FS_AZURE_NETWORKING_LIBRARY, DEFAULT_NETWORKING_LIBRARY);

Review Comment:
   Do we want to keep Apache as the default?
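   
   Whichever default is chosen, deployments can still switch libraries per cluster. A hypothetical override is sketched below; the concrete key string behind FS_AZURE_NETWORKING_LIBRARY and the enum value name are assumptions, not confirmed by this excerpt.
   
   import org.apache.hadoop.conf.Configuration;
   
   public class NetlibOverrideSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Assumed key and value; check the constants in the PR for the real ones.
       conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");
     }
   }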



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -363,6 +364,10 @@ public class AbfsConfiguration{
       FS_AZURE_ABFS_ENABLE_CHECKSUM_VALIDATION, DefaultValue = DEFAULT_ENABLE_ABFS_CHECKSUM_VALIDATION)
   private boolean isChecksumValidationEnabled;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey =
+      FS_AZURE_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES, DefaultValue = DEFAULT_APACHE_HTTP_CLIENT_MAX_IO_EXCEPTION_RETRIES)

Review Comment:
   How did we arrive at this default value?
   By intuition it looks reasonable, though.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {

Review Comment:
   Does this need to be synchronized as well?
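   
   For illustration, a minimal sketch of removeClient guarded by the same lock used for insertion; Closeable stands in for AbfsApacheHttpClient, and all names are illustrative only.
   
   import java.io.Closeable;
   import java.io.IOException;
   import java.util.HashMap;
   import java.util.Map;
   
   public class RemoveClientSketch {
   
     private static final Map<String, Closeable> CLIENT_MAP = new HashMap<>();
   
     static void removeClient(final String clientId) throws IOException {
       final Closeable client;
       synchronized (CLIENT_MAP) {
         client = CLIENT_MAP.remove(clientId);
       }
       if (client != null) {
         // close outside the lock so I/O does not block other callers
         client.close();
       }
     }
   }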



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {

Review Comment:
   Nit: I would suggest some changes to the variable names here.
   There are two types of client involved:
   clientId is the AbfsClient's ID,
   and client is the AbfsApacheHttpClient.
   
   Maybe use abfsClientId and apacheHttpClient as the variable names to avoid any confusion.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {

Review Comment:
   Can we make this whole function synchronized to avoid checking the map twice?
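   
   A sketch of an alternative, assuming the map type can be changed: with a ConcurrentHashMap the double-checked locking collapses into a single atomic computeIfAbsent (the stub class stands in for AbfsApacheHttpClient).
   
   import java.util.concurrent.ConcurrentHashMap;
   
   public class ClientCacheSketch {
   
     private static final ConcurrentHashMap<String, AbfsApacheHttpClientStub>
         CLIENT_MAP = new ConcurrentHashMap<>();
   
     static AbfsApacheHttpClientStub getOrCreate(final String clientId) {
       // computeIfAbsent is atomic: the factory runs at most once per key
       return CLIENT_MAP.computeIfAbsent(clientId,
           id -> new AbfsApacheHttpClientStub());
     }
   
     static class AbfsApacheHttpClientStub { }
   }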



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsManagedHttpContext.java:
##########
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.http.HttpClientConnection;
+import org.apache.http.client.protocol.HttpClientContext;
+
+public class AbfsManagedHttpContext extends HttpClientContext {

Review Comment:
   nit: Class name should be AbfsManaged**HttpClientContext**



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/AbfsApacheHttpExpect100Exception.java:
##########
@@ -0,0 +1,36 @@
+/**

Review Comment:
   Why do we need a separate Expect100 exception class?
   Can we use an IOException with "100" as the status code?
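   
   For context, the usage in sendPayload() (ex.getHttpResponse()) suggests why a dedicated type helps: unlike a plain IOException, it can carry the HttpResponse so the caller can still read the status line and error body after a failed expect-100 handshake. A minimal sketch, assuming that is the intent (the real class body is not quoted in this thread):
   
   import java.io.IOException;
   
   import org.apache.http.HttpResponse;
   
   public class Expect100ExceptionSketch extends IOException {
   
     private final HttpResponse httpResponse;
   
     public Expect100ExceptionSketch(final String message,
         final HttpResponse httpResponse) {
       super(message);
       this.httpResponse = httpResponse;
     }
   
     public HttpResponse getHttpResponse() {
       return httpResponse;
     }
   }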



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,317 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+import java.io.NotSerializableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+/**
+ * Connection-pooling heuristics adapted from JDK's connection pooling `KeepAliveCache`
+ * <p>
+ * Why this implementation is required in comparison to {@link org.apache.http.impl.conn.PoolingHttpClientConnectionManager}
+ * connection-pooling:
+ * <ol>
+ * <li>PoolingHttpClientConnectionManager heuristic caches all the reusable connections it has created.
+ * JDK's implementation only caches limited number of connections. The limit is given by JVM system
+ * property "http.maxConnections". If there is no system-property, it defaults to 5.</li>
+ * <li>In PoolingHttpClientConnectionManager, it expects the application to provide `setMaxPerRoute` and `setMaxTotal`,
+ * which the implementation uses as the total number of connections it can create. For application using ABFS, it is not
+ * feasible to provide a value in the initialisation of the connectionManager. JDK's implementation has no cap on the
+ * number of connections it can create.</li>
+ * </ol>
+ */
+public final class KeepAliveCache
+    extends HashMap<KeepAliveCache.KeepAliveKey, KeepAliveCache.ClientVector>
+    implements Runnable {
+
+  private boolean threadShouldPause = true;
+
+  private boolean threadShouldRun = true;
+
+  private int maxConn;
+
+  private KeepAliveCache() {
+    Thread thread = new Thread(this);
+    thread.start();
+    setMaxConn();
+  }
+
+  private void setMaxConn() {
+    String sysPropMaxConn = System.getProperty(HTTP_MAX_CONN_SYS_PROP);
+    if (sysPropMaxConn == null) {
+      maxConn = DEFAULT_MAX_CONN_SYS_PROP;
+    } else {
+      maxConn = Integer.parseInt(sysPropMaxConn);

Review Comment:
   It would be good to catch NumberFormatException here and return the default in case of an exception.
   
   In fact, that would cover null input as well, since Integer.parseInt(null) also throws NumberFormatException.
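   
   A sketch of that hardening, with the constants inlined to match the values described in the KeepAliveCache javadoc (JDK default of 5 when the property is unset):
   
   public class MaxConnSketch {
   
     private static final String HTTP_MAX_CONN_SYS_PROP = "http.maxConnections";
     private static final int DEFAULT_MAX_CONN_SYS_PROP = 5;
   
     private int maxConn;
   
     void setMaxConn() {
       try {
         // one catch covers both a malformed value and a missing property,
         // since Integer.parseInt(null) also throws NumberFormatException
         maxConn = Integer.parseInt(System.getProperty(HTTP_MAX_CONN_SYS_PROP));
       } catch (NumberFormatException e) {
         maxConn = DEFAULT_MAX_CONN_SYS_PROP;
       }
     }
   }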
   



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,423 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.EMPTY_STRING;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  private static volatile AbfsApacheHttpClient ABFS_APACHE_HTTP_CLIENT;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,

Review Comment:
   Should we have the read timeout and connection timeout passed as constructor parameters here, as we do for AbfsHttpOperation?
   
   Proposing this to keep a common contract between both subclasses of HttpOperation.
   These could be moved to the parent class itself, along with other constructs like getClientRequestId, getResponseHeaders, etc.
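   
   A rough sketch of the proposed shared contract, with the timeouts hoisted into the parent; all names here are illustrative, not the PR's final API.
   
   import java.net.URL;
   
   public abstract class HttpOperationContractSketch {
   
     private final URL url;
     private final String method;
     private final int connectionTimeoutMs;
     private final int readTimeoutMs;
   
     protected HttpOperationContractSketch(final URL url, final String method,
         final int connectionTimeoutMs, final int readTimeoutMs) {
       this.url = url;
       this.method = method;
       this.connectionTimeoutMs = connectionTimeoutMs;
       this.readTimeoutMs = readTimeoutMs;
     }
   
     protected final URL getUrl() { return url; }
   
     protected final String getMethod() { return method; }
   
     protected final int getConnectionTimeoutMs() { return connectionTimeoutMs; }
   
     protected final int getReadTimeoutMs() { return readTimeoutMs; }
   }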



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/HttpOperationType.java:
##########
@@ -0,0 +1,24 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;

Review Comment:
   I feel this should be moved to the constants package.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/kac/KeepAliveCache.java:
##########
@@ -0,0 +1,345 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;

Review Comment:
   Why do we need a separate package for this?



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsApacheHttpClient.java:
##########
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.config.Registry;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.conn.socket.ConnectionSocketFactory;
+import org.apache.http.conn.socket.PlainConnectionSocketFactory;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+
+import static org.apache.http.conn.ssl.SSLConnectionSocketFactory.getDefaultHostnameVerifier;
+
+public class AbfsApacheHttpClient {
+  private final CloseableHttpClient httpClient;
+
+  private final AbfsConfiguration abfsConfiguration;
+
+  public AbfsApacheHttpClient(DelegatingSSLSocketFactory delegatingSSLSocketFactory,
+      final AbfsConfiguration abfsConfiguration) {
+    this.abfsConfiguration = abfsConfiguration;
+    final AbfsConnectionManager connMgr = new AbfsConnectionManager(
+        createSocketFactoryRegistry(
+            new SSLConnectionSocketFactory(delegatingSSLSocketFactory,
+                getDefaultHostnameVerifier())),
+        new org.apache.hadoop.fs.azurebfs.services.AbfsConnFactory());
+    final HttpClientBuilder builder = HttpClients.custom();
+    builder.setConnectionManager(connMgr)
+        .setRequestExecutor(new AbfsManagedHttpRequestExecutor(
+            abfsConfiguration.getHttpReadTimeout()))
+        .disableContentCompression()
+        .disableRedirectHandling()
+        .disableAutomaticRetries()
+        .setUserAgent(
+            ""); // SDK will set the user agent header in the pipeline. Don't let Apache waste time

Review Comment:
   Also, the comment needs to be updated: we don't use an SDK here.
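   
   e.g. something along these lines (a sketch of the corrected comment only, not the patch itself):
   
   ```java
   .setUserAgent(
       ""); // AbfsClient sets the User-Agent header per request; keep the
            // client-level default empty so Apache HttpClient does not add its own.
   ```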



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ApacheHttpClientHealthMonitor.java:
##########
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+
+public final class ApacheHttpClientHealthMonitor {

Review Comment:
   What I feel is we should still keep it separate from AbfsRestOperation, but there are a lot of new classes related to the Apache client. We could move them into a common file with multiple classes defined; converting some into inner classes would also be good.
   
   Just a thought.
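   
   For example, purely illustrative (the container class name is hypothetical):
   
   ```java
   /** Single home for Apache-client helper types instead of many small files. */
   public final class ApacheHttpClientSupport {
   
     private ApacheHttpClientSupport() {
     }
   
     /** Tracks whether the Apache client should keep being used. */
     static final class HealthMonitor {
       private volatile boolean usable = true;
   
       boolean isUsable() {
         return usable;
       }
   
       void markUnusable() {
         usable = false;
       }
     }
   }
   ```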





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542814746


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java:
##########
@@ -55,22 +56,47 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest {
    * For test performance, a full x*y test matrix is not used.
    * @return the test parameters
    */
-  @Parameterized.Parameters(name = "Size={0}-readahead={1}")
+  @Parameterized.Parameters(name = "Size={0}-readahead={1}-Client={2}")
   public static Iterable<Object[]> sizes() {
-    return Arrays.asList(new Object[][]{{MIN_BUFFER_SIZE, true},
-        {DEFAULT_READ_BUFFER_SIZE, false},
-        {DEFAULT_READ_BUFFER_SIZE, true},
-        {APPENDBLOB_MAX_WRITE_BUFFER_SIZE, false},
-        {MAX_BUFFER_SIZE, true}});
+    return Arrays.asList(new Object[][]{

Review Comment:
   The formatting here is inconsistent with the rest of the parameter matrix.
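   
   e.g. keeping one entry per line throughout (a sketch; the third-column values are illustrative, since the rest of the matrix is not shown here):
   
   ```java
   @Parameterized.Parameters(name = "Size={0}-readahead={1}-Client={2}")
   public static Iterable<Object[]> sizes() {
     return Arrays.asList(new Object[][]{
         {MIN_BUFFER_SIZE, true, HttpOperationType.JDK_HTTP_URL_CONNECTION},
         {DEFAULT_READ_BUFFER_SIZE, false, HttpOperationType.JDK_HTTP_URL_CONNECTION},
         {DEFAULT_READ_BUFFER_SIZE, true, HttpOperationType.APACHE_HTTP_CLIENT},
         {APPENDBLOB_MAX_WRITE_BUFFER_SIZE, false, HttpOperationType.APACHE_HTTP_CLIENT},
         {MAX_BUFFER_SIZE, true, HttpOperationType.APACHE_HTTP_CLIENT}
     });
   }
   ```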





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "saxenapranav (via GitHub)" <gi...@apache.org>.
saxenapranav commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1549007840


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,423 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.EMPTY_STRING;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  private static volatile AbfsApacheHttpClient ABFS_APACHE_HTTP_CLIENT;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,

Review Comment:
   Taken; now passing the read and connect timeouts in the constructor.





Re: [PR] HADOOP-19120. ApacheHttpClient adaptation in ABFS. [hadoop]

Posted by "anmolanmol1234 (via GitHub)" <gi...@apache.org>.
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1562362652


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -0,0 +1,422 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsApacheHttpExpect100Exception;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPatch;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.util.EntityUtils;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.APACHE_IMPL;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_DELETE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_HEAD;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PATCH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_POST;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_PUT;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID;
+import static org.apache.http.entity.ContentType.TEXT_PLAIN;
+
+/**
+ * Implementation of {@link HttpOperation} for orchestrating server calls using
+ * Apache Http Client.
+ */
+public class AbfsAHCHttpOperation extends HttpOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      AbfsAHCHttpOperation.class);
+
+  /**
+   * Map to store the AbfsApacheHttpClient. Each instance of AbfsClient to have
+   * a unique AbfsApacheHttpClient instance. The key of the map is the UUID of the client.
+   */
+  private static final Map<String, AbfsApacheHttpClient>
+      ABFS_APACHE_HTTP_CLIENT_MAP = new HashMap<>();
+
+  private AbfsApacheHttpClient abfsApacheHttpClient;
+
+  private HttpRequestBase httpRequestBase;
+
+  private HttpResponse httpResponse;
+
+  private AbfsManagedHttpContext abfsHttpClientContext;
+
+  private final AbfsRestOperationType abfsRestOperationType;
+
+  private boolean connectionDisconnectedOnError = false;
+
+  private AbfsApacheHttpExpect100Exception abfsApacheHttpExpect100Exception;
+
+  private final boolean isPayloadRequest;
+
+  private List<AbfsHttpHeader> requestHeaders;
+
+  private AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final List<AbfsHttpHeader> requestHeaders,
+      final AbfsConfiguration abfsConfiguration,
+      final String clientId,
+      final AbfsRestOperationType abfsRestOperationType) {
+    super(LOG, url, method);
+    this.abfsRestOperationType = abfsRestOperationType;
+    this.requestHeaders = requestHeaders;
+    setAbfsApacheHttpClient(abfsConfiguration, clientId);
+    this.isPayloadRequest = isPayloadRequest(method);
+  }
+
+  public AbfsAHCHttpOperation(final URL url,
+      final String method,
+      final ArrayList<AbfsHttpHeader> requestHeaders,
+      final int httpStatus) {
+    this(url, method, requestHeaders, null);
+    setStatusCode(httpStatus);
+  }
+
+  private void setAbfsApacheHttpClient(final AbfsConfiguration abfsConfiguration,
+      final String clientId) {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+    if (client == null) {
+      synchronized (ABFS_APACHE_HTTP_CLIENT_MAP) {
+        client = ABFS_APACHE_HTTP_CLIENT_MAP.get(clientId);
+        if (client == null) {
+          client = new AbfsApacheHttpClient(
+              DelegatingSSLSocketFactory.getDefaultFactory(),
+              abfsConfiguration);
+          ABFS_APACHE_HTTP_CLIENT_MAP.put(clientId, client);
+        }
+      }
+    }
+    abfsApacheHttpClient = client;
+  }
+
+  static void removeClient(final String clientId) throws IOException {
+    AbfsApacheHttpClient client = ABFS_APACHE_HTTP_CLIENT_MAP.remove(clientId);
+    if (client != null) {
+      client.close();
+    }
+  }
+
+  @VisibleForTesting
+  AbfsManagedHttpContext setFinalAbfsClientContext() {
+    return new AbfsManagedHttpContext();
+  }
+
+  private boolean isPayloadRequest(final String method) {
+    return HTTP_METHOD_PUT.equals(method) || HTTP_METHOD_PATCH.equals(method)
+        || HTTP_METHOD_POST.equals(method);
+  }
+
+
+  public static AbfsAHCHttpOperation getAbfsApacheHttpClientHttpOperationWithFixedResult(
+      final URL url,
+      final String method,
+      final int httpStatus) {
+    return new AbfsAHCHttpOperation(url, method, new ArrayList<>(), httpStatus);
+  }
+
+  @Override
+  protected InputStream getErrorStream() throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity == null) {
+      return null;
+    }
+    return entity.getContent();
+  }
+
+  @Override
+  String getConnProperty(final String key) {
+    for (AbfsHttpHeader header : requestHeaders) {
+      if (header.getName().equals(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  @Override
+  URL getConnUrl() {
+    return getUrl();
+  }
+
+  @Override
+  String getConnRequestMethod() {
+    return getMethod();
+  }
+
+  @Override
+  Integer getConnResponseCode() throws IOException {
+    return getStatusCode();
+  }
+
+  @Override
+  String getConnResponseMessage() throws IOException {
+    return getStatusDescription();
+  }
+
+  public void processResponse(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    try {
+      if (!isPayloadRequest) {
+        prepareRequest();
+        httpResponse = executeRequest();
+      }
+      parseResponseHeaderAndBody(buffer, offset, length);
+    } finally {
+      if (httpResponse != null) {
+        EntityUtils.consume(httpResponse.getEntity());
+      }
+      if (httpResponse != null
+          && httpResponse instanceof CloseableHttpResponse) {
+        ((CloseableHttpResponse) httpResponse).close();
+      }
+    }
+  }
+
+  @VisibleForTesting
+  void parseResponseHeaderAndBody(final byte[] buffer,
+      final int offset,
+      final int length) throws IOException {
+    setStatusCode(httpResponse.getStatusLine().getStatusCode());
+
+    setStatusDescription(httpResponse.getStatusLine().getReasonPhrase());
+
+    String requestId = getResponseHeader(
+        HttpHeaderConfigurations.X_MS_REQUEST_ID);
+    if (requestId == null) {
+      requestId = AbfsHttpConstants.EMPTY_STRING;
+    }
+    setRequestId(requestId);
+
+    // dump the headers
+    AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
+        getResponseHeaders(httpResponse));
+    parseResponse(buffer, offset, length);
+  }
+
+  @VisibleForTesting
+  HttpResponse executeRequest() throws IOException {

Review Comment:
   I was referring to the client-managed context.
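   
   For reference, a minimal sketch of a client-managed context with stock Apache HttpClient (illustrative only, not the patch code; the example class and method names are hypothetical):
   
   ```java
   import java.io.IOException;
   
   import org.apache.http.HttpResponse;
   import org.apache.http.client.methods.HttpRequestBase;
   import org.apache.http.client.protocol.HttpClientContext;
   import org.apache.http.impl.client.CloseableHttpClient;
   
   final class ManagedContextExample {
     static HttpResponse executeWithManagedContext(final CloseableHttpClient client,
         final HttpRequestBase request) throws IOException {
       // The same context instance goes into execute(), so state the client
       // records against it (route, connection reuse, timings) can be read
       // back once the call returns.
       final HttpClientContext context = HttpClientContext.create();
       return client.execute(request, context);
     }
   }
   ```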


