Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/03/23 16:23:37 UTC

[GitHub] [hadoop] steveloughran opened a new pull request #2807: S3/hadoop 17511 auditing

steveloughran opened a new pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807


   
   This introduces the notion of an AuditSpan, created for a given operation;
   the goal is to pass it along everywhere.
   
   It's thread-local per FS instance; store operations pick it up in their
   constructor from the StoreContext.
   
   The entryPoint() method in the S3A FS has been enhanced to initiate the
   spans. For this to work, internal code SHALL NOT call those entry points
   (done) and all public API points MUST be declared as entry points.
   
   This is done, with a marker attribute `@AuditEntryPoint` indicating each
   entry point.
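   
   A minimal sketch of the entry point pattern (an illustration only, with
   a hypothetical innerMkdirs() helper; not the exact S3AFileSystem code):
   
   ```java
   // Sketch: a public API method creates and activates a span for the
   // operation; closing the span restores whichever span was active before.
   @AuditEntryPoint
   public boolean mkdirs(Path path) throws IOException {
     try (AuditSpan span = createSpan("mkdirs", path.toString(), null)) {
       // internal code runs inside the span; it must not call another
       // entry point, which would start a second span.
       return innerMkdirs(path);
     }
   }
   ```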
   
   The audit span create/deactivate sequence is roughly the same as duration
   tracking, so the two operations are generally merged: most of the metrics
   the S3A FS collects are now durations.
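   
   For example, the change to putObjectDirect() later in this thread wraps
   the SDK call so the PUT is collected as a duration statistic (snippet
   taken from the patch diff quoted below):
   
   ```java
   PutObjectResult result = trackDurationOfSupplier(
       getDurationTrackerFactory(),
       OBJECT_PUT_REQUESTS.getSymbol(), () ->
           s3.putObject(putObjectRequest));
   ```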
   
   Part of the isolation into spans means that there are now explicit
   operations for mkdirs() and getContentSummary().
   
   The auditing is intended to be a plugin point; currently there is
   the LoggingAuditor, which:
   * logs at debug
   * adds an HTTP "Referer" header with audit tracing (see the example
     after this list)
   * can be set to raise an exception if the SDK is handed an AWS request
     and there is no active span (skipped for the multipart upload part and
     complete calls, as the TransferManager in the SDK issues those out of
     span).
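   
   As an illustration of that referrer header: given the
   `REFERRER_PATH_FORMAT` of `/%3$s/%2$s/` and the op/path/span-id query
   parameters built in HttpReferrerAuditHeader, a generated header could
   look like the line below (host, parameter names and values here are
   illustrative, not taken verbatim from the patch):
   
   ```
   https://audit.example.org/op_mkdirs/span-0001/?op=op_mkdirs&p1=s3a://bucket/dir&id=span-0001
   ```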
   
   There is also the NoopAuditor, which:
   * does nothing
   
   A recent change is that we want every span to have a spanID (a string,
   unique across all spans of that FS instance); even the no-op spans have
   unique IDs.
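   
   A minimal sketch of one way to meet that requirement (an illustration
   only, not the patch's actual implementation):
   
   ```java
   import java.util.UUID;
   import java.util.concurrent.atomic.AtomicLong;
   
   /** Illustrative only: span IDs unique within a single FS instance. */
   final class SpanIdSource {
     // one random base per FS instance, so IDs from different instances
     // are also distinct
     private final String base = UUID.randomUUID().toString();
     private final AtomicLong counter = new AtomicLong();
   
     String newSpanId() {
       return base + "-" + counter.incrementAndGet();
     }
   }
   ```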




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847159189


   rebased to trunk again after the AWS region patch from mehakmeet




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818733840


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  1s |  |  trunk passed  |
   | -1 :x: |  compile  |  18m 29s | [/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root in trunk failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 30s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  21m 30s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 188 new + 1750 unchanged - 0 fixed = 1938 total (was 1750)  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  20m  7s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 unchanged - 1 fixed = 1833 total (was 1833)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 39s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 30s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m 14s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 36s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  6s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 198m 38s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 83f985da7c49 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b3fc4432115d065c124749082e1773f3b9c287ed |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/testReport/ |
   | Max. process+thread count | 1827 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811485602


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 42 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 54s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 43s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  20m  1s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 2034 unchanged - 1 fixed = 2036 total (was 2035)  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  0s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1929 unchanged - 1 fixed = 1930 total (was 1930)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 46s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 41 new + 185 unchanged - 4 fixed = 226 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 44s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 34s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 26s |  |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   2m 32s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 31s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   | Failed junit tests | hadoop.fs.s3a.audit.TestLoggingAuditor |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 2033a173c143 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5cd2126f0303cc455a52bac2202955b3bf721a81 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/testReport/ |
   | Max. process+thread count | 1279 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] bogthe commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
bogthe commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r633141325



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/HttpReferrerAuditHeader.java
##########
@@ -0,0 +1,500 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.StringJoiner;
+import java.util.function.Supplier;
+import java.util.stream.Collectors;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.store.audit.CommonAuditContext;
+import org.apache.http.NameValuePair;
+import org.apache.http.client.utils.URLEncodedUtils;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.REFERRER_ORIGIN_HOST;
+
+/**
+ * Contains all the logic for generating an HTTP "Referer"
+ * entry; includes escaping query params.
+ * Tests for this are in
+ * {@code org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader}
+ * so as to verify that header generation in the S3A auditors, and
+ * S3 log parsing, all work.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public final class HttpReferrerAuditHeader {
+
+  /**
+   * Format of path to build: {@value}.
+   * the params passed in are (context ID, span ID, op)
+   */
+  public static final String REFERRER_PATH_FORMAT = "/%3$s/%2$s/";
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(HttpReferrerAuditHeader.class);
+
+  /**
+   * Log for warning of problems creating headers; will only log
+   * a problem once per process instance.
+   * This is to avoid logs being flooded with errors.
+   */
+  private static final LogExactlyOnce WARN_OF_URL_CREATION =
+      new LogExactlyOnce(LOG);
+
+  /** Context ID. */
+  private final String contextId;
+
+  /** operation name. */
+  private final String operationName;
+
+  /** Span ID. */
+  private final String spanId;
+
+  /** optional first path. */
+  private final String path1;
+
+  /** optional second path. */
+  private final String path2;
+
+  /**
+   * The header as created in the constructor; used in toString().
+   * A new header is built on demand in {@link #buildHttpReferrer()}
+   * so that evaluated attributes are dynamically evaluated
+   * in the correct thread/place.
+   */
+  private final String initialHeader;
+
+  /**
+   * Map of simple attributes.
+   */
+  private final Map<String, String> attributes;
+
+  /**
+   * Parameters dynamically evaluated on the thread just before
+   * the request is made.
+   */
+  private final Map<String, Supplier<String>> evaluated;
+
+  /**
+   * Elements to filter from the final header.
+   */
+  private final Set<String> filter;
+
+  /**
+   * Instantiate.
+   *
+   * Context and operationId are expected to be well formed
+   * numeric/hex strings, at least adequate to be
+   * used as individual path elements in a URL.
+   */
+  private HttpReferrerAuditHeader(
+      final Builder builder) {
+    this.contextId = requireNonNull(builder.contextId);
+    this.evaluated = builder.evaluated;
+    this.filter = builder.filter;
+    this.operationName = requireNonNull(builder.operationName);
+    this.path1 = builder.path1;
+    this.path2 = builder.path2;
+    this.spanId = requireNonNull(builder.spanId);
+
+    // copy the parameters from the builder and extend
+    attributes = builder.attributes;
+
+    addAttribute(PARAM_OP, operationName);
+    addAttribute(PARAM_PATH, path1);
+    addAttribute(PARAM_PATH2, path2);
+    addAttribute(PARAM_ID, spanId);
+
+    // patch in global context values where not set
+    Iterable<Map.Entry<String, String>> globalContextValues
+        = builder.globalContextValues;
+    if (globalContextValues != null) {
+      for (Map.Entry<String, String> entry : globalContextValues) {
+        attributes.putIfAbsent(entry.getKey(), entry.getValue());

Review comment:
       What are the implications of merging multiple `globalContextValues` maps into a single one (i.e. `attributes`)? Will there be a situation where different contexts have the same `key` but different `values`? It doesn't seem too bad; maybe a warning in the comments/documentation for this scenario is enough?
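       
       For reference, a small standalone demo of the `putIfAbsent`
       semantics in question: when two sources carry the same key, the
       value already present in `attributes` wins and the later one is
       silently dropped (keys/values below are hypothetical):
       
       ```java
       import java.util.HashMap;
       import java.util.Map;
       
       public class PutIfAbsentDemo {
         public static void main(String[] args) {
           Map<String, String> attributes = new HashMap<>();
           attributes.put("job", "query-1");          // set on the span itself
           attributes.putIfAbsent("job", "query-2");  // same key: dropped
           attributes.putIfAbsent("user", "alice");   // absent key: added
           // prints job=query-1 and user=alice (iteration order unspecified)
           System.out.println(attributes);
         }
       }
       ```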

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/store/audit/TestCommonAuditContext.java
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store.audit;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+import org.assertj.core.api.AbstractStringAssert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_COMMAND;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PROCESS;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.PROCESS_ID;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.clearGlobalContextEntry;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.currentAuditContext;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.getGlobalContextEntry;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.getGlobalContextEntries;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.noteEntryPoint;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.setGlobalContextEntry;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests of the common audit context.
+ */
+public class TestCommonAuditContext extends AbstractHadoopTestBase {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestCommonAuditContext.class);
+
+  private final CommonAuditContext context = currentAuditContext();
+  /**
+   * We can set, get and enumerate global context values.
+   */
+  @Test
+  public void testGlobalSetGetEnum() throws Throwable {
+
+    String s = "command";
+    setGlobalContextEntry(PARAM_COMMAND, s);
+    assertGlobalEntry(PARAM_COMMAND)
+        .isEqualTo(s);
+    // and the iterators.
+    List<Map.Entry<String, String>> list = StreamSupport
+        .stream(getGlobalContextEntries().spliterator(),
+            false)
+        .filter(e -> e.getKey().equals(PARAM_COMMAND))
+        .collect(Collectors.toList());
+    assertThat(list)
+        .hasSize(1)
+        .allMatch(e -> e.getValue().equals(s));
+  }
+
+  @Test
+  public void testVerifyProcessID() throws Throwable {
+    assertThat(
+        getGlobalContextEntry(PARAM_PROCESS))
+        .describedAs("global context value of %s", PARAM_PROCESS)
+        .isEqualTo(PROCESS_ID);
+  }
+
+
+  @Test
+  public void testNullValue() throws Throwable {
+    assertThat(context.get(PARAM_PROCESS))
+        .describedAs("Value of context element %s", PARAM_PROCESS)
+        .isNull();
+  }
+
+  @Test
+  public void testThreadId() throws Throwable {
+    String t1 = getContextValue(PARAM_THREAD1);
+    Long tid = Long.valueOf(t1);
+    assertThat(tid).describedAs("thread ID")
+        .isEqualTo(Thread.currentThread().getId());
+  }
+
+  /**
+   * Verify functions are dynamically evaluated.
+   */
+  @Test
+  public void testDynamicEval() throws Throwable {
+    context.reset();
+    final AtomicBoolean ab = new AtomicBoolean(false);
+    context.put("key", () ->
+        Boolean.toString(ab.get()));
+    assertContextValue("key")
+        .isEqualTo("false");
+    // update the reference and the next get call will
+    // pick up the new value.
+    ab.set(true);
+    assertContextValue("key")
+        .isEqualTo("true");
+  }
+
+  private String getContextValue(final String key) {
+    String val = context.get(key);
+    assertThat(val).isNotBlank();
+    return val;
+  }
+
+  /**
+   * Start an assertion on a context value.
+   * @param key key to look up
+   * @return an assert which can be extended call
+   */
+  private AbstractStringAssert<?> assertContextValue(final String key) {
+    String val = context.get(key);
+    return assertThat(val)
+        .describedAs("Value of context element %s", key)
+        .isNotBlank();
+  }
+
+  @Test
+  public void testNoteEntryPoint() throws Throwable {
+    setAndAssertEntryPoint(this).isEqualTo("TestCommonAuditContext");
+

Review comment:
       nit: extra space

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
##########
@@ -117,13 +122,17 @@
   /**
    * Source of time.
    */
-  private ITtlTimeProvider timeProvider;
+
+  /** Time source for S3Guard TTLs. */
+  private final ITtlTimeProvider timeProvider;
+
+  /** Operation Auditor. */
+  private final AuditSpanSource<AuditSpanS3A> auditor;
 
   /**
    * Instantiate.
-   * @deprecated as public method: use {@link StoreContextBuilder}.
    */
-  public StoreContext(
+  StoreContext(

Review comment:
       nit: is access modifier intentionally left out?

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -2430,13 +2749,16 @@ PutObjectResult putObjectDirect(PutObjectRequest putObjectRequest)
     LOG.debug("PUT {} bytes to {}", len, putObjectRequest.getKey());
     incrementPutStartStatistics(len);
     try {
-      PutObjectResult result = s3.putObject(putObjectRequest);
+      PutObjectResult result = trackDurationOfSupplier(
+          getDurationTrackerFactory(),
+          OBJECT_PUT_REQUESTS.getSymbol(), () ->
+              s3.putObject(putObjectRequest));
       incrementPutCompletedStatistics(true, len);
       // update metadata
       finishedWrite(putObjectRequest.getKey(), len,
           result.getETag(), result.getVersionId(), null);
       return result;
-    } catch (AmazonClientException e) {
+    } catch (SdkBaseException e) {

Review comment:
       Any reason for moving to `SdkBaseException`? I see this `putObjectDirect` method signals that it throws `AmazonClientException`; no bug, just a small inconsistency.

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/audit/AuditingFunctions.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store.audit;
+
+import javax.annotation.Nullable;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.util.functional.CallableRaisingIOE;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+import org.apache.hadoop.util.functional.InvocationRaisingIOE;
+
+/**
+ * Static methods to assist in working with Audit Spans.
+ * The {@code withinX} calls take a span and a closure/function etc.
+ * and return a new function of the same types which will
+ * activate the span.
+ * They do not deactivate it afterwards to avoid accidentally deactivating
+ * the already-active span during a chain of operations in the same thread.
+ * All they do is ensure that the given span is guaranteed to be
+ * active when the passed in callable/function/invokable is evaluated.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public class AuditingFunctions {
+
+  /**
+   * Given a callable, return a new callable which
+   * activates and deactivates the span around the inner invocation.

Review comment:
       Comment out of date. This mentions that the callable `activates` and `deactivates` the span, while the class comment says `They do not deactivate it afterwards...`. The callable also contains no call to deactivate.
   
   The same comment applies to all methods in this class.
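   
   For reference, a minimal sketch of the activate-only wrapping the class
   javadoc describes (the withinAuditSpan name follows AuditingFunctions;
   the body here is a simplification, not necessarily the patch's exact
   code):
   
   ```java
   // Wrap a callable so the given span is active when it runs; deliberately
   // no deactivate() afterwards, to avoid deactivating a span that was
   // already active on the calling thread.
   public static <T> CallableRaisingIOE<T> withinAuditSpan(
       @Nullable AuditSpan auditSpan,
       CallableRaisingIOE<T> operation) {
     return auditSpan == null
         ? operation
         : () -> {
             auditSpan.activate();
             return operation.apply();
           };
   }
   ```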

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java
##########
@@ -0,0 +1,695 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+import com.amazonaws.AmazonWebServiceRequest;
+import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
+import com.amazonaws.services.s3.model.CannedAccessControlList;
+import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
+import com.amazonaws.services.s3.model.CopyObjectRequest;
+import com.amazonaws.services.s3.model.DeleteObjectRequest;
+import com.amazonaws.services.s3.model.DeleteObjectsRequest;
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import com.amazonaws.services.s3.model.GetObjectRequest;
+import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
+import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
+import com.amazonaws.services.s3.model.ListNextBatchOfObjectsRequest;
+import com.amazonaws.services.s3.model.ListObjectsRequest;
+import com.amazonaws.services.s3.model.ListObjectsV2Request;
+import com.amazonaws.services.s3.model.ObjectListing;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.PartETag;
+import com.amazonaws.services.s3.model.PutObjectRequest;
+import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
+import com.amazonaws.services.s3.model.SSECustomerKey;
+import com.amazonaws.services.s3.model.SelectObjectContentRequest;
+import com.amazonaws.services.s3.model.UploadPartRequest;
+import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AEncryptionMethods;
+import org.apache.hadoop.fs.s3a.api.RequestFactory;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecretOperations;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecrets;
+
+import static org.apache.commons.lang3.StringUtils.isNotEmpty;
+import static org.apache.hadoop.fs.s3a.impl.InternalConstants.DEFAULT_UPLOAD_PART_COUNT_LIMIT;
+import static org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkArgument;
+import static org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull;
+
+/**
+ * The standard implementation of the request factory.
+ * This creates AWS SDK request classes for the specific bucket,
+ * with standard options/headers set.
+ * It is also where custom setting parameters can take place.
+ *
+ * All creation of AWS S3 requests MUST be through this class so that
+ * common options (encryption etc.) can be added here,
+ * and so that any chained transformation of requests can be applied.
+ *
+ * This is where audit span information is added to the requests,
+ * until it is done in the AWS SDK itself.
+ *
+ * All created requests will be passed through
+ * {@link PrepareRequest#prepareRequest(AmazonWebServiceRequest)} before
+ * being returned to the caller.
+ */
+public class RequestFactoryImpl implements RequestFactory {
+
+  public static final Logger LOG = LoggerFactory.getLogger(
+      RequestFactoryImpl.class);
+
+  /**
+   * Target bucket.
+   */
+  private final String bucket;
+
+  /**
+   * Encryption secrets.
+   */
+  private EncryptionSecrets encryptionSecrets;
+
+  /**
+   * ACL For new objects.
+   */
+  private final CannedAccessControlList cannedACL;
+
+  /**
+   * Max number of multipart entries allowed in a large
+   * upload. Tunable for testing only.
+   */
+  private final long multipartPartCountLimit;
+
+  /**
+   * Requester Pays.
+   * This is to be wired up in a PR with its
+   * own tests and docs.
+   */
+  private final boolean requesterPays;
+
+  /**
+   * Callback to prepare requests.
+   */
+  private final PrepareRequest requestPreparer;
+
+  /**
+   * Constructor.
+   * @param builder builder with all the configuration.
+   */
+  protected RequestFactoryImpl(
+      final RequestFactoryBuilder builder) {
+    this.bucket = builder.bucket;
+    this.cannedACL = builder.cannedACL;
+    this.encryptionSecrets = builder.encryptionSecrets;
+    this.multipartPartCountLimit = builder.multipartPartCountLimit;
+    this.requesterPays = builder.requesterPays;
+    this.requestPreparer = builder.requestPreparer;
+  }
+
+  /**
+   * Preflight preparation of AWS request.
+   * @param <T> web service request
+   * @return prepared entry.
+   */
+  @Retries.OnceRaw
+  private <T extends AmazonWebServiceRequest> T prepareRequest(T t) {
+    return requestPreparer != null
+        ? requestPreparer.prepareRequest(t)
+        : t;
+  }
+
+  /**
+   * Get the canned ACL of this FS.
+   * @return an ACL, if any
+   */
+  @Override
+  public CannedAccessControlList getCannedACL() {
+    return cannedACL;
+  }
+
+  /**
+   * Get the target bucket.
+   * @return the bucket.
+   */
+  protected String getBucket() {
+    return bucket;
+  }
+
+  /**
+   * Create the AWS SDK structure used to configure SSE,
+   * if the encryption secrets contain the information/settings for this.
+   * @return an optional set of KMS Key settings
+   */
+  @Override
+  public Optional<SSEAwsKeyManagementParams> generateSSEAwsKeyParams() {
+    return EncryptionSecretOperations.createSSEAwsKeyManagementParams(
+        encryptionSecrets);
+  }
+
+  /**
+   * Create the SSE-C structure for the AWS SDK, if the encryption secrets
+   * contain the information/settings for this.
+   * This will contain a secret extracted from the bucket/configuration.
+   * @return an optional customer key.
+   */
+  @Override
+  public Optional<SSECustomerKey> generateSSECustomerKey() {
+    return EncryptionSecretOperations.createSSECustomerKey(
+        encryptionSecrets);
+  }
+
+  /**
+   * Get the encryption algorithm of this endpoint.
+   * @return the encryption algorithm.
+   */
+  @Override
+  public S3AEncryptionMethods getServerSideEncryptionAlgorithm() {
+    return encryptionSecrets.getEncryptionMethod();
+  }
+
+  /**
+   * Sets server side encryption parameters to the part upload
+   * request when encryption is enabled.
+   * @param request upload part request
+   */
+  protected void setOptionalUploadPartRequestParameters(
+      UploadPartRequest request) {
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Sets server side encryption parameters on the GET metadata request
+   * when encryption is enabled.
+   * @param request get object metadata request
+   */
+  protected void setOptionalGetObjectMetadataParameters(
+      GetObjectMetadataRequest request) {
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional parameters when initiating the request (encryption,
+   * headers, storage, etc).
+   * @param request request to patch.
+   */
+  protected void setOptionalMultipartUploadRequestParameters(
+      InitiateMultipartUploadRequest request) {
+    generateSSEAwsKeyParams().ifPresent(request::setSSEAwsKeyManagementParams);
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional parameters for a PUT request.
+   * @param request request to patch.
+   */
+  protected void setOptionalPutRequestParameters(PutObjectRequest request) {
+    generateSSEAwsKeyParams().ifPresent(request::setSSEAwsKeyManagementParams);
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional metadata for an object being created or copied.
+   * @param metadata to update.
+   */
+  protected void setOptionalObjectMetadata(ObjectMetadata metadata) {
+    final S3AEncryptionMethods algorithm
+        = getServerSideEncryptionAlgorithm();
+    if (S3AEncryptionMethods.SSE_S3 == algorithm) {
+      metadata.setSSEAlgorithm(algorithm.getMethod());
+    }
+  }
+
+  /**
+   * Create a new object metadata instance.
+   * Any standard metadata headers are added here, for example:
+   * encryption.
+   *
+   * @param length length of data to set in header; Ignored if negative
+   * @return a new metadata instance
+   */
+  @Override
+  public ObjectMetadata newObjectMetadata(long length) {
+    final ObjectMetadata om = new ObjectMetadata();
+    setOptionalObjectMetadata(om);
+    if (length >= 0) {
+      om.setContentLength(length);
+    }
+    return om;
+  }
+
+  @Override
+  public CopyObjectRequest newCopyObjectRequest(String srcKey,
+      String dstKey,
+      ObjectMetadata srcom) {
+    CopyObjectRequest copyObjectRequest =
+        new CopyObjectRequest(getBucket(), srcKey, getBucket(), dstKey);
+    ObjectMetadata dstom = newObjectMetadata(srcom.getContentLength());
+    HeaderProcessing.cloneObjectMetadata(srcom, dstom);
+    setOptionalObjectMetadata(dstom);
+    copyEncryptionParameters(srcom, copyObjectRequest);
+    copyObjectRequest.setCannedAccessControlList(cannedACL);
+    copyObjectRequest.setNewObjectMetadata(dstom);
+    Optional.ofNullable(srcom.getStorageClass())
+        .ifPresent(copyObjectRequest::setStorageClass);
+    return prepareRequest(copyObjectRequest);
+  }
+
+  /**
+   * Propagate encryption parameters from source file if set else use the
+   * current filesystem encryption settings.
+   * @param srcom source object metadata.
+   * @param copyObjectRequest copy object request body.
+   */
+  protected void copyEncryptionParameters(
+      ObjectMetadata srcom,
+      CopyObjectRequest copyObjectRequest) {
+    String sourceKMSId = srcom.getSSEAwsKmsKeyId();
+    if (isNotEmpty(sourceKMSId)) {
+      // source KMS ID is propagated
+      LOG.debug("Propagating SSE-KMS settings from source {}",
+          sourceKMSId);
+      copyObjectRequest.setSSEAwsKeyManagementParams(
+          new SSEAwsKeyManagementParams(sourceKMSId));
+    }
+    switch (getServerSideEncryptionAlgorithm()) {
+    case SSE_S3:
+      /* no-op; this is set in destination object metadata */
+      break;
+
+    case SSE_C:
+      generateSSECustomerKey().ifPresent(customerKey -> {
+        copyObjectRequest.setSourceSSECustomerKey(customerKey);
+        copyObjectRequest.setDestinationSSECustomerKey(customerKey);
+      });
+      break;
+
+    case SSE_KMS:
+      generateSSEAwsKeyParams().ifPresent(
+          copyObjectRequest::setSSEAwsKeyManagementParams);
+      break;
+    default:
+    }
+  }
+  /**
+   * Create a putObject request.
+   * Adds the ACL and metadata
+   * @param key key of object
+   * @param metadata metadata header
+   * @param srcfile source file
+   * @return the request
+   */
+  @Override
+  public PutObjectRequest newPutObjectRequest(String key,
+      ObjectMetadata metadata, File srcfile) {
+    Preconditions.checkNotNull(srcfile);
+    PutObjectRequest putObjectRequest = new PutObjectRequest(getBucket(), key,
+        srcfile);
+    setOptionalPutRequestParameters(putObjectRequest);
+    putObjectRequest.setCannedAcl(cannedACL);
+    putObjectRequest.setMetadata(metadata);
+    return prepareRequest(putObjectRequest);
+  }
+
+  /**
+   * Create a {@link PutObjectRequest} request.
+   * The metadata is assumed to have been configured with the size of the
+   * operation.
+   * @param key key of object
+   * @param metadata metadata header
+   * @param inputStream source data.
+   * @return the request
+   */
+  @Override
+  public PutObjectRequest newPutObjectRequest(String key,
+      ObjectMetadata metadata,
+      InputStream inputStream) {
+    Preconditions.checkNotNull(inputStream);
+    Preconditions.checkArgument(isNotEmpty(key), "Null/empty key");
+    PutObjectRequest putObjectRequest = new PutObjectRequest(getBucket(), key,
+        inputStream, metadata);
+    setOptionalPutRequestParameters(putObjectRequest);
+    putObjectRequest.setCannedAcl(cannedACL);
+    return prepareRequest(putObjectRequest);
+  }
+
+  @Override
+  public PutObjectRequest newDirectoryMarkerRequest(String directory) {
+    String key = directory.endsWith("/")
+        ? directory
+        : (directory + "/");
+    // an input stream which is always empty
+    final InputStream im = new InputStream() {
+      @Override
+      public int read() throws IOException {
+        return -1;
+      }
+    };
+    // preparation happens in here
+    final ObjectMetadata md = newObjectMetadata(0L);
+    md.setContentType(HeaderProcessing.CONTENT_TYPE_X_DIRECTORY);
+    PutObjectRequest putObjectRequest =
+        newPutObjectRequest(key, md, im);
+    return putObjectRequest;
+  }
+
+  @Override
+  public ListMultipartUploadsRequest
+      newListMultipartUploadsRequest(String prefix) {
+    ListMultipartUploadsRequest request = new ListMultipartUploadsRequest(
+        getBucket());
+    if (prefix != null) {
+      request.setPrefix(prefix);
+    }
+    return prepareRequest(request);
+  }
+
+  @Override
+  public AbortMultipartUploadRequest newAbortMultipartUploadRequest(
+      String destKey,
+      String uploadId) {
+    return prepareRequest(new AbortMultipartUploadRequest(getBucket(),
+        destKey,
+        uploadId));
+  }
+
+  @Override
+  public InitiateMultipartUploadRequest newMultipartUploadRequest(
+      String destKey) {
+    final InitiateMultipartUploadRequest initiateMPURequest =
+        new InitiateMultipartUploadRequest(getBucket(),
+            destKey,
+            newObjectMetadata(-1));
+    initiateMPURequest.setCannedACL(getCannedACL());
+    setOptionalMultipartUploadRequestParameters(initiateMPURequest);
+    return prepareRequest(initiateMPURequest);
+  }
+
+  @Override
+  public CompleteMultipartUploadRequest newCompleteMultipartUploadRequest(
+      String destKey,
+      String uploadId,
+      List<PartETag> partETags) {
+    // a copy of the list is required, so that the AWS SDK doesn't
+    // attempt to sort an unmodifiable list.
+    return prepareRequest(new CompleteMultipartUploadRequest(bucket,
+        destKey, uploadId, new ArrayList<>(partETags)));
+

Review comment:
       nit: empty line

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/S3LogParser.java
##########
@@ -0,0 +1,306 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Class to help parse AWS S3 Logs.
+ * see https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html
+ *
+ * Getting the regexp right is surprisingly hard; this class does it
+ * explicitly and names each group in the process.
+ * All group names are included in {@link #GROUPS} in the order
+ * within the log entries.
+ *
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class S3LogParser {
+
+  /**
+   * Simple entry: anything up to a space.
+   * {@value}.
+   */
+  private static final String SIMPLE = "[^ ]*";
+
+  /**
+   * Date/Time. Everything within square braces.
+   * {@value}.
+   */
+  private static final String DATETIME = "\\[(.*?)\\]";
+
+  /**
+   * A natural number or "-".
+   * {@value}.
+   */
+  private static final String NUMBER = "(-|[0-9]*)";
+
+  /**
+   * A Quoted field or "-".
+   * {@value}.
+   */
+  private static final String QUOTED = "(-|\"[^\"]*\")";
+
+
+  /**
+   * An entry in the regexp.
+   * @param name name of the group
+   * @param pattern pattern to use in the regexp
+   * @return the pattern for the regexp
+   */
+  private static String e(String name, String pattern) {
+    return String.format("(?<%s>%s) ", name, pattern);
+  }
+
+  /**
+   * An entry in the regexp.
+   * @param name name of the group
+   * @param pattern pattern to use in the regexp
+   * @return the pattern for the regexp
+   */
+  private static String eNoTrailing(String name, String pattern) {
+    return String.format("(?<%s>%s)", name, pattern);
+  }
+
+
+  // simple entry
+
+  /**
+   * Simple entry using the {@link #SIMPLE} pattern.
+   * @param name name of the element (for code clarity only)
+   * @return the pattern for the regexp
+   */
+  private static String e(String name) {
+    return e(name, SIMPLE);
+  }
+
+  /**
+   * Quoted entry using the {@link #QUOTED} pattern.
+   * @param name name of the element (for code clarity only)
+   * @return the pattern for the regexp
+   */
+  private static String Q(String name) {

Review comment:
       nit: Why is this capital `Q` and the other is lowercase `e`
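       
       As a usage illustration (assuming the fragments are assembled into
       a public `LOG_ENTRY_PATTERN` constant; that name and the group
       names below are assumptions, since the quoted diff only shows the
       building blocks):
       
       ```java
       // Hypothetical usage: pull fields out of one S3 server log line
       // via the named groups the e()/Q() helpers declare.
       Matcher matcher = S3LogParser.LOG_ENTRY_PATTERN.matcher(logLine);
       if (matcher.matches()) {
         String key = matcher.group("key");           // object key
         String referrer = matcher.group("referrer"); // audit data lands here
       }
       ```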






[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-810076217


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 18s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-828664945


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 18s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/16/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r637863394



##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestAuditSpanLifecycle.java
##########
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.List;
+
+import com.amazonaws.handlers.RequestHandler2;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.noopAuditConfig;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Unit tests related to span lifecycle.
+ */
+public class TestAuditSpanLifecycle extends AbstractAuditingTest {
+
+  private Configuration conf;
+
+  private AuditSpan resetSpan;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+    resetSpan = getManager().getActiveAuditSpan();
+  }
+
+  protected Configuration createConfig() {
+    return noopAuditConfig();
+  }
+
+  /**
+   * Core lifecycle (remember: the service has already been started).
+   */
+  @Test
+  public void testStop() throws Throwable {
+    getManager().stop();
+  }
+
+  @Test
+  public void testCreateRequestHandlers() throws Throwable {
+    List<RequestHandler2> handlers
+        = getManager().createRequestHandlers();
+    assertThat(handlers).isNotEmpty();
+  }
+
+  @Test
+  public void testInitialSpanIsInvalid() throws Throwable {
+    assertThat(resetSpan)
+        .matches(f -> !f.isValidSpan(), "is invalid");
+  }
+
+  @Test
+  public void testCreateCloseSpan() throws Throwable {
+    AuditSpan span = getManager().createSpan("op", null, null);
+    assertThat(span)
+        .matches(AuditSpan::isValidSpan, "is valid");
+    assertActiveSpan(span);
+    // activation when already active is no-op
+    span.activate();
+    assertActiveSpan(span);
+    // close the span
+    span.close();
+    // the original span is restored.
+    assertActiveSpan(resetSpan);
+  }
+
+  @Test
+  public void testSpanActivation() throws Throwable {
+    // real activation switches spans in the current thread.
+
+    AuditSpan span1 = getManager().createSpan("op1", null, null);
+    AuditSpan span2 = getManager().createSpan("op2", null, null);
+    assertActiveSpan(span2);
+    // switch back to span 1
+    span1.activate();
+    assertActiveSpan(span1);
+    // then to span 2
+    span2.activate();
+    assertActiveSpan(span2);
+    span2.close();
+

Review comment:
       ok, added span1 close & an assert that we're still in the reset span. Did something similar in the test case underneath (accidentally; I'd navigated to the wrong line).
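
   For illustration only, the added lines might look like this at the end of `testSpanActivation()`: a sketch using the helpers already in this file (`assertActiveSpan`, `resetSpan`); the merged patch may differ.
   ```java
   span2.close();
   // hypothetical follow-up: once both spans are closed, the active
   // span should fall back to the original reset span
   span1.close();
   assertActiveSpan(resetSpan);
   ```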






[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842340255


   Gist showing some log output during a terasort test:
   https://gist.github.com/steveloughran/8e0aadb51c63f1c3538deda19ee952ae
   
   Some of the events (e.g. 183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8) have the job ID in the referrer header, "ji=job_1620911577786_0006". This is only set during the FS operations the S3A committer performs during task and job operations, as they're the only ones we know are explicitly related to a job. If we were confident that whichever thread called `Committer.setupTask()` was the only thread making FileSystem API calls for that task, then we could set it at the task level.
   
   The `org.apache.hadoop.fs.audit.CommonAuditContext` class provides global and thread-local context maps to let apps attach such attributes; the new ManifestCommitter will be setting them so that once ABFS picks up the same auditing, the context info will come down.
   
   Modified versions of Hive, Spark etc. could use this API to set any of their context info when a specific thread was scheduled to work for a given query; trying to guess this in the Hadoop committer isn't the right place.
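   
   As a minimal sketch of how an application could attach such attributes, assuming the `CommonAuditContext` API added in this PR (the key names and values below are purely illustrative):
   ```java
   import org.apache.hadoop.fs.audit.CommonAuditContext;
   
   public class AuditContextExample {
     public static void main(String[] args) {
       // process-wide entry, visible to audit spans created from any thread
       CommonAuditContext.setGlobalContextEntry("app", "example-etl");
   
       // thread-local entry: picked up by spans created on this thread
       CommonAuditContext context = CommonAuditContext.currentAuditContext();
       context.put("qu", "query-0001"); // "qu" is a hypothetical query-id key
   
       // ... FileSystem API calls made on this thread would now carry
       // both entries in their audit context / referrer header ...
   
       context.remove("qu"); // clean up once the thread moves to other work
     }
   }
   ```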
   
   
   




[GitHub] [hadoop] HyukjinKwon commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842879301


   cc @mswit-databricks FYI




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841353589


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 44 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 26s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 13s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 15s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  21m 15s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 3 new + 1995 unchanged - 3 fixed = 1998 total (was 1998)  |
   | +1 :green_heart: |  compile  |  19m 53s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  19m 53s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 3 new + 1896 unchanged - 3 fixed = 1899 total (was 1899)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/blanks-eol.txt) |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 21 new + 188 unchanged - 5 fixed = 209 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 39s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 63 unchanged - 25 fixed = 64 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 14s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  6s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 204m  4s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.audit.S3LogParser.GROUPS should be package protected  At S3LogParser.java: At S3LogParser.java:[line 268] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 3a57a6998dcd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33ab912ccbd0778225a7984d9d476766e7e120b2 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/testReport/ |
   | Max. process+thread count | 1458 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/22/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847148990


   Somehow the header test had failed on the principal. Changes:
   * how the principal is added has changed
   * fixed up the referrer entry, which, after Friday's change adding a hadoop/1 prefix, was no longer a valid URI.
   
   This wasn't just a yetus failure; I replicated it locally. How did my tests pass? They didn't, but I hadn't noticed because the failsafe test run was happening anyway... and the failure of the unit tests was showing up in a scrolled-off test run I wasn't looking at.
   
   That's not good: I've always expected maven to fail as soon as unit tests do. Will investigate separately.




[GitHub] [hadoop] bogthe commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
bogthe commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-845947817


   @steveloughran are any more changes coming in? I'm happy with the state of this CR, 👍 to get it merged.




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-840556222


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/20/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-808185819


   I'm going to say the failures are related, as it's in the auditor code. Interesting that you saw it and I didn't. Will look at it next week.




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818733840


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  1s |  |  trunk passed  |
   | -1 :x: |  compile  |  18m 29s | [/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root in trunk failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 30s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  21m 30s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 188 new + 1750 unchanged - 0 fixed = 1938 total (was 1750)  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  20m  7s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 unchanged - 1 fixed = 1833 total (was 1833)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 39s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 30s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m 14s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 36s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  6s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 198m 38s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 83f985da7c49 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b3fc4432115d065c124749082e1773f3b9c287ed |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/testReport/ |
   | Max. process+thread count | 1827 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847125985


   Legit test regression: the code to determine the principal is returning null.
   ```
   [ERROR] testHeaderComplexPaths(org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader)  Time elapsed: 0.006 s  <<< FAILURE!
   org.junit.ComparisonFailure: [pr] expected:<"jenkins"> but was:<null>
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   	at org.apache.hadoop.fs.s3a.audit.AbstractAuditingTest.assertMapContains(AbstractAuditingTest.java:210)
   	at org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader.testHeaderComplexPaths(TestHttpReferrerAuditHeader.java:135)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
   	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
   	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
   	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   	at java.lang.Thread.run(Thread.java:748)
   ```




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-806832078








[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805044041


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805169524


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805044041








[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847700540


   Yetus reports are a bit confused, but the output is good:
   * checkstyle warnings are mistaken/unavoidable
   * tests are good
   Merging.




[GitHub] [hadoop] dongjoon-hyun commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841888674


   Also, cc @sunchao 




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805988234


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 24s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-846551528


   Thanks for the reviews, comments, votes etc.
   I'll address all of @mehakmeet's little details, push up a rebased/squashed PR to force it through yetus, then merge.




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847334987


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 44 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 10s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 37s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  8s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  20m  8s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 3 new + 1995 unchanged - 3 fixed = 1998 total (was 1998)  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 58s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 3 new + 1871 unchanged - 3 fixed = 1874 total (was 1874)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/artifact/out/blanks-eol.txt) |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 43s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 5 new + 188 unchanged - 5 fixed = 193 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  hadoop-common in the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 63 unchanged - 25 fixed = 63 total (was 88)  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 41s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  0s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 198m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 19f577c6f5fa 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / abd89a039fdc0bd7bede3f497c76019d16ba2ed8 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/testReport/ |
   | Max. process+thread count | 1840 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/29/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847149166


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/27/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-825940804


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  1s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  7s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 11s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  25m 11s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1935 unchanged - 1 fixed = 1936 total (was 1936)  |
   | +1 :green_heart: |  compile  |  22m  0s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  22m  0s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1830 unchanged - 1 fixed = 1831 total (was 1831)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 27s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 184 unchanged - 4 fixed = 191 total (was 188)  |
   | +1 :green_heart: |  mvnsite  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 46s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 52s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  19m 32s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 14s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 11s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 229m 59s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux c4b1b865cdb4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b6bb91624184a69f27bcce64b52e034d7c937866 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-827887228


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 29s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/15/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805151587


   it's not merging and I've over-squashed things into the AWS metrics patch. Will need to unroll it




[GitHub] [hadoop] dongjoon-hyun commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841888674


   Also, cc @sunchao 




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847810378


   backporting to branch-3.3 if the tests run successfully. Merge has gone in and first test run is happy.




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-806832078


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] mehakmeet commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
mehakmeet commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r636762939



##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestAuditSpanLifecycle.java
##########
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.List;
+
+import com.amazonaws.handlers.RequestHandler2;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.noopAuditConfig;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Unit tests related to span lifecycle.
+ */
+public class TestAuditSpanLifecycle extends AbstractAuditingTest {
+
+  private Configuration conf;
+
+  private AuditSpan resetSpan;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+    resetSpan = getManager().getActiveAuditSpan();
+  }
+
+  protected Configuration createConfig() {
+    return noopAuditConfig();
+  }
+
+  /**
+   * Core lifecycle (remember: the service has already been started).
+   */
+  @Test
+  public void testStop() throws Throwable {
+    getManager().stop();
+  }
+
+  @Test
+  public void testCreateRequestHandlers() throws Throwable {
+    List<RequestHandler2> handlers
+        = getManager().createRequestHandlers();
+    assertThat(handlers).isNotEmpty();
+  }
+
+  @Test
+  public void testInitialSpanIsInvalid() throws Throwable {
+    assertThat(resetSpan)
+        .matches(f -> !f.isValidSpan(), "is invalid");
+  }
+
+  @Test
+  public void testCreateCloseSpan() throws Throwable {
+    AuditSpan span = getManager().createSpan("op", null, null);
+    assertThat(span)
+        .matches(AuditSpan::isValidSpan, "is valid");
+    assertActiveSpan(span);
+    // activation when already active is no-op
+    span.activate();
+    assertActiveSpan(span);
+    // close the span
+    span.close();
+    // the original span is restored.
+    assertActiveSpan(resetSpan);
+  }
+
+  @Test
+  public void testSpanActivation() throws Throwable {
+    // real activation switches spans in the current thead.

Review comment:
       typo: "thread"

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestAuditSpanLifecycle.java
##########
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.List;
+
+import com.amazonaws.handlers.RequestHandler2;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.noopAuditConfig;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Unit tests related to span lifecycle.
+ */
+public class TestAuditSpanLifecycle extends AbstractAuditingTest {
+
+  private Configuration conf;

Review comment:
       Never used.

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java
##########
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.IOException;
+import java.util.Map;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.After;
+import org.junit.Before;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.Statistic;
+import org.apache.hadoop.fs.s3a.api.RequestFactory;
+import org.apache.hadoop.fs.s3a.impl.RequestFactoryImpl;
+import org.apache.hadoop.fs.statistics.IOStatisticAssertions;
+import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.s3a.Statistic.INVOCATION_GET_FILE_STATUS;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.UNAUDITED_OPERATION;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.createIOStatisticsStoreForAuditing;
+import static org.apache.hadoop.service.ServiceOperations.stopQuietly;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Abstract class for auditor unit tests.
+ */
+public abstract class AbstractAuditingTest extends AbstractHadoopTestBase {
+
+  protected static final String OPERATION
+      = INVOCATION_GET_FILE_STATUS.getSymbol();
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AbstractAuditingTest.class);
+
+  public static final String PATH_1 = "/path1";
+
+  public static final String PATH_2 = "/path2";
+
+  /**
+   * Statistics store with the auditor counters wired up.
+   */
+  private final IOStatisticsStore ioStatistics =
+      createIOStatisticsStoreForAuditing();
+
+  private RequestFactory requestFactory;
+
+  private AuditManagerS3A manager;
+
+  @Before
+  public void setup() throws Exception {
+    requestFactory = RequestFactoryImpl.builder()
+        .withBucket("bucket")
+        .build();
+    manager = AuditIntegration.createAndStartAuditManager(
+        createConfig(),
+        ioStatistics);
+  }
+
+  /**
+   * Create config.
+   * @return config to use when creating a manager
+   */
+  protected abstract Configuration createConfig();
+
+  @After
+  public void teardown() {
+    stopQuietly(manager);
+  }
+
+  protected IOStatisticsStore getIOStatistics() {
+    return ioStatistics;
+  }
+
+  protected RequestFactory getRequestFactory() {
+    return requestFactory;
+  }
+
+  protected AuditManagerS3A getManager() {
+    return manager;
+  }
+
+  /**
+   * Assert that a specific span is active.
+   * This matches on the wrapped spans.
+   * @param span span to assert over.
+   */
+  protected void assertActiveSpan(final AuditSpan span) {
+    assertThat(activeSpan())
+        .isSameAs(span);
+  }
+
+  /**
+   * Assert a span is unbound/invalid.
+   * @param span span to assert over.
+   */
+  protected void assertUnbondedSpan(final AuditSpan span) {
+    assertThat(span.isValidSpan())
+        .describedAs("Validity of %s", span)
+        .isFalse();
+  }
+
+  protected AuditSpanS3A activeSpan() {
+    return manager.getActiveAuditSpan();
+  }
+
+  /**
+   * Create a head request and pass it through the manager's beforeExecution()
+   * callback.
+   * @return a processed request.
+   */
+  protected GetObjectMetadataRequest head() {
+    return manager.beforeExecution(
+        requestFactory.newGetObjectMetadataRequest("/"));
+  }
+
+  /**
+   * Assert a head request fails as there is no
+   * active span.
+   */
+  protected void assertHeadUnaudited() throws Exception {
+    intercept(AuditFailureException.class,
+        UNAUDITED_OPERATION, this::head);
+  }
+
+  /**
+   * Assert that the audit failure is of a given value.
+   * Returns the value to assist in chaining,
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyAuditFailureCount(
+      final long expected) {
+    return verifyCounter(Statistic.AUDIT_FAILURE, expected);
+  }
+
+  /**
+   * Assert that the audit execution count
+   * is of a given value.
+   * Returns the value to assist in chaining,
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyAuditExecutionCount(
+      final long expected) {
+    return verifyCounter(Statistic.AUDIT_REQUEST_EXECUTION, expected);
+  }
+
+  /**
+   * Assert that a statistic counter is of a given value.
+   * Returns the value to assist in chaining,
+   * @param statistic statistic to check
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyCounter(final Statistic statistic,

Review comment:
       can make this private.

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/ITestAuditAccessChecks.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.security.AccessControlException;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.touch;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_ACCESS_CHECK_FAILURE;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_REQUEST_EXECUTION;
+import static org.apache.hadoop.fs.s3a.Statistic.INVOCATION_ACCESS;
+import static org.apache.hadoop.fs.s3a.Statistic.STORE_IO_REQUEST;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.AUDIT_SERVICE_CLASSNAME;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.resetAuditOptions;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_ALL_PROBES;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_FILE_PROBE;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.ROOT_FILE_STATUS_PROBE;
+import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString;
+
+/**
+ * Test S3A FS Access permit/deny is passed through all the way to the
+ * auditor.
+ * Uses {@link AccessCheckingAuditor} to enable/disable access.
+ * There are not currently any contract tests for this; behaviour
+ * based on base FileSystem implementation.
+ */
+public class ITestAuditAccessChecks extends AbstractS3ACostTest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestAuditAccessChecks.class);
+
+  private AccessCheckingAuditor auditor;
+
+  public ITestAuditAccessChecks() {
+    super(true);
+  }
+
+  @Override
+  public Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    resetAuditOptions(conf);
+    conf.set(AUDIT_SERVICE_CLASSNAME, AccessCheckingAuditor.CLASS);
+    return conf;
+  }
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+    auditor = (AccessCheckingAuditor) getFileSystem().getAuditor();
+  }
+
+  @Test
+  public void testFileAccessAllowed() throws Throwable {
+    describe("Enable checkaccess and verify it works with expected"
+        + " statitics");
+    auditor.setAccessAllowed(true);
+    Path path = methodPath();
+    S3AFileSystem fs = getFileSystem();
+    touch(fs, path);
+    verifyMetrics(
+        () -> access(fs, path),
+        with(INVOCATION_ACCESS, 1),
+        whenRaw(FILE_STATUS_FILE_PROBE));
+  }
+
+  private String access(final S3AFileSystem fs, final Path path)

Review comment:
       Maybe move this to the end of the test class, after all the ```@Test``` methods.

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/ITestAuditManager.java
##########
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.nio.file.AccessDeniedException;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.WriteOperationHelper;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_FAILURE;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_REQUEST_EXECUTION;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.enableLoggingAuditor;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.resetAuditOptions;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.AUDIT_REQUEST_HANDLERS;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.UNAUDITED_OPERATION;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.lookupCounterStatistic;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Test S3A FS Access permit/deny is passed through all the way to the
+ * auditor.
+ * Uses {@link AccessCheckingAuditor} to enable/disable access.
+ * There are not currently any contract tests for this; behaviour
+ * based on base FileSystem implementation.
+ */
+public class ITestAuditManager extends AbstractS3ACostTest {
+
+  public ITestAuditManager() {
+    super(true);
+  }
+
+  @Override
+  public Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    resetAuditOptions(conf);
+    enableLoggingAuditor(conf);
+    conf.set(AUDIT_REQUEST_HANDLERS,
+        SimpleAWSRequestHandler.CLASS);
+    return conf;
+  }
+
+  /**
+   * Get the FS IOStatistics.
+   * @return the FS live IOSTats.
+   */
+  private IOStatistics iostats() {
+    return getFileSystem().getIOStatistics();
+  }
+
+  /**
+   * Verify that operations outside a span are rejected
+   * by ensuring that the thread is outside a span, create
+   * a write operation helper, then
+   * reject it.
+   */
+  @Test
+  public void testInvokeOutOfSpanRejected() throws Throwable {
+    describe("Operations against S3 will be rejected outside of a span");
+    final S3AFileSystem fs = getFileSystem();
+    final long failures0 = lookupCounterStatistic(iostats(),
+        AUDIT_FAILURE.getSymbol());
+    final long exec0 = lookupCounterStatistic(iostats(),
+        AUDIT_REQUEST_EXECUTION.getSymbol());
+    // API call
+    // create and close a span, so the FS is not in a span.
+    fs.createSpan("span", null, null).close();
+
+    // this will be out of span
+    final WriteOperationHelper writer
+        = fs.getWriteOperationHelper();
+
+    // which can be verified
+    Assertions.assertThat(writer.getAuditSpan())
+        .matches(s -> !s.isValidSpan(), "Span is not valid");
+
+    // an S3 API call will fail and be mapped to access denial.
+    final AccessDeniedException ex = intercept(
+        AccessDeniedException.class, UNAUDITED_OPERATION, () ->
+            writer.listMultipartUploads("/"));
+
+    // verify the type of the inner cause, throwing the outer ex
+    // if it is null or a different class
+    if (!(ex.getCause() instanceof AuditFailureException)) {
+      throw ex;
+    }
+
+    assertThatStatisticCounter(iostats(), AUDIT_REQUEST_EXECUTION.getSymbol())
+        .isGreaterThan(exec0);
+    assertThatStatisticCounter(iostats(), AUDIT_FAILURE.getSymbol())
+        .isGreaterThan(failures0);
+  }
+
+  @Test
+  public void testRequestHandlerBinding() throws Throwable {
+    describe("Verify that extra request handlers can be added and that they"
+        + " will be invoked during request execution");
+    final long baseCount = SimpleAWSRequestHandler.getInvocationCount();
+    final S3AFileSystem fs = getFileSystem();
+    final long exec0 = lookupCounterStatistic(iostats(),
+        AUDIT_REQUEST_EXECUTION.getSymbol());
+    // API call
+    fs.getBucketLocation();
+    // which MUST have ended up calling the extension request handler
+    Assertions.assertThat(SimpleAWSRequestHandler.getInvocationCount())
+        .describedAs("Invocatin count of plugged in request handler")

Review comment:
       typo: "invocation"

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java
##########
@@ -0,0 +1,317 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.regex.Matcher;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.fs.audit.CommonAuditContext;
+import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.loggingAuditConfig;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.REFERRER_HEADER_FILTER;
+import static org.apache.hadoop.fs.s3a.audit.S3LogParser.*;
+import static org.apache.hadoop.fs.s3a.impl.HeaderProcessing.HEADER_REFERRER;
+import static org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader.maybeStripWrappedQuotes;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_FILESYSTEM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PRINCIPAL;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD0;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_TIMESTAMP;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests for referrer audit header generation/parsing.
+ */
+public class TestHttpReferrerAuditHeader extends AbstractAuditingTest {
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestHttpReferrerAuditHeader.class);
+
+  private LoggingAuditor auditor;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+
+    auditor = (LoggingAuditor) getManager().getAuditor();
+  }
+
+  /**
+   * Creaate the config from {@link AuditTestSupport#loggingAuditConfig()}
+   * and patch in filtering for fields x1, x2, x3.
+   * @return a logging configuration.
+   */
+  protected Configuration createConfig() {
+    final Configuration conf = loggingAuditConfig();
+    conf.set(REFERRER_HEADER_FILTER, "x1, x2, x3");
+    return conf;
+  }
+
+  /**
+   * This verifies that passing a request through the audit manager
+   * causes the http referrer header to be added, that it can
+   * be split to query parameters, and that those parameters match
+   * those of the active wrapped span.
+   */
+  @Test
+  public void testHttpReferrerPatchesTheRequest() throws Throwable {
+    AuditSpan span = span();
+    long ts = span.getTimestamp();
+    GetObjectMetadataRequest request = head();
+    Map<String, String> headers
+        = request.getCustomRequestHeaders();
+    assertThat(headers)
+        .describedAs("Custom headers")
+        .containsKey(HEADER_REFERRER);
+    String header = headers.get(HEADER_REFERRER);
+    LOG.info("Header is {}", header);
+    Map<String, String> params
+        = HttpReferrerAuditHeader.extractQueryParameters(header);
+    assertMapContains(params, PARAM_PRINCIPAL,
+        UserGroupInformation.getCurrentUser().getUserName());
+    assertMapContains(params, PARAM_FILESYSTEM_ID, auditor.getAuditorId());
+    assertMapContains(params, PARAM_OP, OPERATION);
+    assertMapContains(params, PARAM_PATH, PATH_1);
+    assertMapContains(params, PARAM_PATH2, PATH_2);
+    String threadID = CommonAuditContext.currentThreadID();
+    assertMapContains(params, PARAM_THREAD0, threadID);
+    assertMapContains(params, PARAM_THREAD1, threadID);
+    assertMapContains(params, PARAM_ID, span.getSpanId());
+    assertThat(span.getTimestamp())
+        .describedAs("Timestamp of " + span)
+        .isEqualTo(ts);
+
+    assertMapContains(params, PARAM_TIMESTAMP,
+        Long.toString(ts));
+  }
+
+  @Test
+  public void testHeaderComplexPaths() throws Throwable {
+    String p1 = "s3a://dotted.bucket/path: value/subdir";
+    String p2 = "s3a://key/";
+    AuditSpan span = getManager().createSpan(OPERATION, p1, p2);
+    long ts = span.getTimestamp();
+    Map<String, String> params = issueRequestAndExtractParameters();
+    assertMapContains(params, PARAM_PRINCIPAL,
+        UserGroupInformation.getCurrentUser().getUserName());
+    assertMapContains(params, PARAM_FILESYSTEM_ID, auditor.getAuditorId());
+    assertMapContains(params, PARAM_OP, OPERATION);
+    assertMapContains(params, PARAM_PATH, p1);
+    assertMapContains(params, PARAM_PATH2, p2);
+    String threadID = CommonAuditContext.currentThreadID();
+    assertMapContains(params, PARAM_THREAD0, threadID);
+    assertMapContains(params, PARAM_THREAD1, threadID);
+    assertMapContains(params, PARAM_ID, span.getSpanId());
+    assertThat(span.getTimestamp())
+        .describedAs("Timestamp of " + span)
+        .isEqualTo(ts);
+
+    assertMapContains(params, PARAM_TIMESTAMP,
+        Long.toString(ts));
+  }
+
+  /**
+   * Issue a request, then get the header field and parse it to the parameter.
+   * @return map of query params on the referrer header.
+   * @throws URISyntaxException failure to parse the header as a URI.
+   */
+  private Map<String, String> issueRequestAndExtractParameters()
+      throws URISyntaxException {
+    head();
+    return HttpReferrerAuditHeader.extractQueryParameters(
+        auditor.getLastHeader());
+  }
+
+
+  /**
+   * Test that headers are filtered out if configured.
+   */
+  @Test
+  public void testHeaderFiltering() throws Throwable {
+    // add two attributes, x2 will be filtered.
+    AuditSpan span = getManager().createSpan(OPERATION, null, null);
+    auditor.addAttribute("x0", "x0");
+    auditor.addAttribute("x2", "x2");
+    final Map<String, String> params
+        = issueRequestAndExtractParameters();
+    assertThat(params)
+        .doesNotContainKey("x2");
+
+  }
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  public static final String SAMPLE_LOG_ENTRY =
+      "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4000000"
+          + " bucket-london"
+          + " [13/May/2021:11:26:06 +0000]"
+          + " 109.157.171.174"
+          + " arn:aws:iam::152813717700:user/dev"
+          + " M7ZB7C4RTKXJKTM9"
+          + " REST.PUT.OBJECT"
+          + " fork-0001/test/testParseBrokenCSVFile"
+          + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+          + " 200"
+          + " -"
+          + " -"
+          + " 794"
+          + " 55"
+          + " 17"
+          + " \"https://audit.example.org/op_create/"
+          + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-00000278/"
+          + "?op=op_create"
+          + "&p1=fork-0001/test/testParseBrokenCSVFile"
+          + "&pr=alice"
+          + "&ps=2eac5a04-2153-48db-896a-09bc9a2fd132"
+          + "&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-00000278&t0=154"
+          + "&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&"
+          + "ts=1620905165700\""
+          + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+          + " -"
+          + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+          + " SigV4"
+          + " ECDHE-RSA-AES128-GCM-SHA256"
+          + " AuthHeader"
+          + " bucket-london.s3.eu-west-2.amazonaws.com"
+          + " TLSv1.2";
+
+  private static final String DESCRIPTION = String.format(
+      "log entry %s split by %s", SAMPLE_LOG_ENTRY,
+      LOG_ENTRY_PATTERN);
+
+  /**
+   * Match the log entry and validate the results.
+   */
+  @Test
+  public void testMatchAWSLogEntry() throws Throwable {
+
+    LOG.info("Matcher pattern is\n'{}'", LOG_ENTRY_PATTERN);
+    LOG.info("Log entry is\n'{}'", SAMPLE_LOG_ENTRY);
+    final Matcher matcher = LOG_ENTRY_PATTERN.matcher(SAMPLE_LOG_ENTRY);
+
+    // match the pattern against the entire log entry.
+    assertThat(matcher.matches())

Review comment:
       Don't think we'll see the error message if this assert fails; better to use ```assertTrue(msg, matcher.matches())```
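       
       A hedged alternative that stays with AssertJ: text passed to describedAs() is attached to the failure output, so the diagnostic need not be lost. This sketch assumes the test's existing DESCRIPTION constant and static assertThat import:
       
       ```java
       // Sketch only: keeping the failure message in AssertJ style.
       // DESCRIPTION and matcher are the ones defined in this test class.
       assertThat(matcher.matches())
           .describedAs(DESCRIPTION)   // included in the failure output
           .isTrue();
       ```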

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestAuditSpanLifecycle.java
##########
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.List;
+
+import com.amazonaws.handlers.RequestHandler2;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.noopAuditConfig;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Unit tests related to span lifecycle.
+ */
+public class TestAuditSpanLifecycle extends AbstractAuditingTest {
+
+  private Configuration conf;
+
+  private AuditSpan resetSpan;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+    resetSpan = getManager().getActiveAuditSpan();
+  }
+
+  protected Configuration createConfig() {
+    return noopAuditConfig();
+  }
+
+  /**
+   * Core lifecycle (remember: the service has already been started).
+   */
+  @Test
+  public void testStop() throws Throwable {
+    getManager().stop();
+  }
+
+  @Test
+  public void testCreateRequestHandlers() throws Throwable {
+    List<RequestHandler2> handlers
+        = getManager().createRequestHandlers();
+    assertThat(handlers).isNotEmpty();
+  }
+
+  @Test
+  public void testInitialSpanIsInvalid() throws Throwable {
+    assertThat(resetSpan)
+        .matches(f -> !f.isValidSpan(), "is invalid");
+  }
+
+  @Test
+  public void testCreateCloseSpan() throws Throwable {
+    AuditSpan span = getManager().createSpan("op", null, null);
+    assertThat(span)
+        .matches(AuditSpan::isValidSpan, "is valid");
+    assertActiveSpan(span);
+    // activation when already active is no-op
+    span.activate();
+    assertActiveSpan(span);
+    // close the span
+    span.close();
+    // the original span is restored.
+    assertActiveSpan(resetSpan);
+  }
+
+  @Test
+  public void testSpanActivation() throws Throwable {
+    // real activation switches spans in the current thead.
+
+    AuditSpan span1 = getManager().createSpan("op1", null, null);
+    AuditSpan span2 = getManager().createSpan("op2", null, null);
+    assertActiveSpan(span2);
+    // switch back to span 1
+    span1.activate();
+    assertActiveSpan(span1);
+    // then to span 2
+    span2.activate();
+    assertActiveSpan(span2);
+    span2.close();
+

Review comment:
       should we close span1 here? Maybe some assertion regarding span1's lifecycle after span2 was closed?
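       
       One possible shape for that follow-up, assuming (as testCreateCloseSpan above suggests) that closing the remaining span restores the original reset span; that restoration behaviour is an assumption, not something verified against the patch:
       
       ```java
       // Hypothetical continuation: close span1 too and check what becomes
       // active. Assumes closing the last open span restores the span that
       // was active before any were created (resetSpan).
       span1.close();
       assertActiveSpan(resetSpan);
       ```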

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/ITestAuditAccessChecks.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.security.AccessControlException;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.touch;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_ACCESS_CHECK_FAILURE;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_REQUEST_EXECUTION;
+import static org.apache.hadoop.fs.s3a.Statistic.INVOCATION_ACCESS;
+import static org.apache.hadoop.fs.s3a.Statistic.STORE_IO_REQUEST;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.AUDIT_SERVICE_CLASSNAME;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.resetAuditOptions;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_ALL_PROBES;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_FILE_PROBE;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.ROOT_FILE_STATUS_PROBE;
+import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString;
+
+/**
+ * Test S3A FS Access permit/deny is passed through all the way to the
+ * auditor.
+ * Uses {@link AccessCheckingAuditor} to enable/disable access.
+ * There are not currently any contract tests for this; behaviour
+ * based on base FileSystem implementation.
+ */
+public class ITestAuditAccessChecks extends AbstractS3ACostTest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestAuditAccessChecks.class);
+
+  private AccessCheckingAuditor auditor;
+
+  public ITestAuditAccessChecks() {
+    super(true);
+  }
+
+  @Override
+  public Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    resetAuditOptions(conf);
+    conf.set(AUDIT_SERVICE_CLASSNAME, AccessCheckingAuditor.CLASS);
+    return conf;
+  }
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+    auditor = (AccessCheckingAuditor) getFileSystem().getAuditor();
+  }
+
+  @Test
+  public void testFileAccessAllowed() throws Throwable {
+    describe("Enable checkaccess and verify it works with expected"
+        + " statitics");

Review comment:
       typo: "statistics"

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java
##########
@@ -0,0 +1,317 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.regex.Matcher;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.fs.audit.CommonAuditContext;
+import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.loggingAuditConfig;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.REFERRER_HEADER_FILTER;
+import static org.apache.hadoop.fs.s3a.audit.S3LogParser.*;
+import static org.apache.hadoop.fs.s3a.impl.HeaderProcessing.HEADER_REFERRER;
+import static org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader.maybeStripWrappedQuotes;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_FILESYSTEM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PRINCIPAL;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD0;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_TIMESTAMP;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests for referrer audit header generation/parsing.
+ */
+public class TestHttpReferrerAuditHeader extends AbstractAuditingTest {
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestHttpReferrerAuditHeader.class);
+
+  private LoggingAuditor auditor;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+
+    auditor = (LoggingAuditor) getManager().getAuditor();
+  }
+
+  /**
+   * Creaate the config from {@link AuditTestSupport#loggingAuditConfig()}
+   * and patch in filtering for fields x1, x2, x3.
+   * @return a logging configuration.
+   */
+  protected Configuration createConfig() {
+    final Configuration conf = loggingAuditConfig();
+    conf.set(REFERRER_HEADER_FILTER, "x1, x2, x3");
+    return conf;
+  }
+
+  /**
+   * This verifies that passing a request through the audit manager
+   * causes the http referrer header to be added, that it can
+   * be split to query parameters, and that those parameters match
+   * those of the active wrapped span.
+   */
+  @Test
+  public void testHttpReferrerPatchesTheRequest() throws Throwable {
+    AuditSpan span = span();
+    long ts = span.getTimestamp();
+    GetObjectMetadataRequest request = head();
+    Map<String, String> headers
+        = request.getCustomRequestHeaders();
+    assertThat(headers)
+        .describedAs("Custom headers")
+        .containsKey(HEADER_REFERRER);
+    String header = headers.get(HEADER_REFERRER);
+    LOG.info("Header is {}", header);
+    Map<String, String> params
+        = HttpReferrerAuditHeader.extractQueryParameters(header);
+    assertMapContains(params, PARAM_PRINCIPAL,
+        UserGroupInformation.getCurrentUser().getUserName());
+    assertMapContains(params, PARAM_FILESYSTEM_ID, auditor.getAuditorId());
+    assertMapContains(params, PARAM_OP, OPERATION);
+    assertMapContains(params, PARAM_PATH, PATH_1);
+    assertMapContains(params, PARAM_PATH2, PATH_2);
+    String threadID = CommonAuditContext.currentThreadID();
+    assertMapContains(params, PARAM_THREAD0, threadID);
+    assertMapContains(params, PARAM_THREAD1, threadID);
+    assertMapContains(params, PARAM_ID, span.getSpanId());
+    assertThat(span.getTimestamp())
+        .describedAs("Timestamp of " + span)
+        .isEqualTo(ts);
+
+    assertMapContains(params, PARAM_TIMESTAMP,
+        Long.toString(ts));
+  }
+
+  @Test
+  public void testHeaderComplexPaths() throws Throwable {

Review comment:
       javadoc for this test would be helpful.
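       
       A possible javadoc, inferred from what the test body exercises; the wording is only a suggestion:
       
       ```java
       /**
        * Verify that paths containing dots, spaces and colons survive the
        * round trip: they are encoded into the referrer header and come
        * back unchanged when the query parameters are extracted.
        */
       ```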

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/ITestAuditManager.java
##########
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.nio.file.AccessDeniedException;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.WriteOperationHelper;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_FAILURE;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_REQUEST_EXECUTION;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.enableLoggingAuditor;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.resetAuditOptions;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.AUDIT_REQUEST_HANDLERS;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.UNAUDITED_OPERATION;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.lookupCounterStatistic;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Test S3A FS Access permit/deny is passed through all the way to the

Review comment:
       The javadoc doesn't seem right for this test.
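       
       Something closer to what the class actually covers might be (again, just a suggestion):
       
       ```java
       /**
        * Tests of the audit manager in a live S3A filesystem: verifies that
        * S3 calls made outside an audit span are rejected, and that extra
        * request handlers declared in configuration are wired up and invoked.
        */
       ```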

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -1214,21 +1359,76 @@ private FSDataInputStream open(
           fileStatus,
           policy,
           changeDetectionPolicy,
-          readAheadRange2);
+          readAheadRange2,
+          auditSpan);
     } else {
       readContext = createReadContext(
           fileStatus,
           inputPolicy,
           changeDetectionPolicy,
-          readAhead);
+          readAhead,
+          auditSpan);
     }
     LOG.debug("Opening '{}'", readContext);
 
     return new FSDataInputStream(
         new S3AInputStream(
             readContext,
             createObjectAttributes(fileStatus),
-            s3));
+            createInputStreamCallbacks(auditSpan)));
+  }
+
+  /**
+   * Overrride point: create the callbacks for S3AInputStream.

Review comment:
       typo: "Override"

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java
##########
@@ -0,0 +1,317 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.regex.Matcher;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.fs.audit.CommonAuditContext;
+import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.loggingAuditConfig;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.REFERRER_HEADER_FILTER;
+import static org.apache.hadoop.fs.s3a.audit.S3LogParser.*;
+import static org.apache.hadoop.fs.s3a.impl.HeaderProcessing.HEADER_REFERRER;
+import static org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader.maybeStripWrappedQuotes;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_FILESYSTEM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PRINCIPAL;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD0;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_TIMESTAMP;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests for referrer audit header generation/parsing.
+ */
+public class TestHttpReferrerAuditHeader extends AbstractAuditingTest {
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestHttpReferrerAuditHeader.class);
+
+  private LoggingAuditor auditor;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+
+    auditor = (LoggingAuditor) getManager().getAuditor();
+  }
+
+  /**
+   * Creaate the config from {@link AuditTestSupport#loggingAuditConfig()}

Review comment:
       typo: "Create"






[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-809710473


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/5/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841228793


   BTW, terasort tests show that the committers are passing job IDs in MR jobs:
   ```
   183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8 stevel-london [13/May/2021:13:16:48 +0000] 109.157.171.170 arn:aws:iam::152813717728:user/stevel-dev R4THYV1DGS4DASS2 REST.GET.BUCKET - "GET /?list-type=2&max-keys=5000&prefix=terasort-magic%2Fsortin%2F__magic%2Fjob-job_1620911577786_0004%2Ftasks%2Fattempt_1620911577786_0004_m_000001_0%2F&fetch-owner=false HTTP/1.1" 200 - 982 - 13 12 "https://audit.example.org/op_delete/7e459c19-f1fe-4713-9788-35d77206f9cc-00000012/?op=op_delete&p1=s3a://stevel-london/terasort-magic/sortin/__magic/job-job_1620911577786_0004/tasks/attempt_1620911577786_0004_m_000001_0&pr=stevel&ps=9866ff56-a50d-4744-8f38-7b9d29942e95&id=7e459c19-f1fe-4713-9788-35d77206f9cc-00000012&t0=1&fs=7e459c19-f1fe-4713-9788-35d77206f9cc&t1=44&ji=job_1620911577786_0004&ts=1620911808104" "Hadoop 3.4.0-SNAPSHOT, aws-sdk-java/1.11.901 Mac_OS_X/10.16 OpenJDK_64-Bit_Server_VM/25.282-b08 java/1.8.0_282 vendor/AdoptOpenJDK" - AW4uMFNGBVw+RZyNWphOgrVA27e1wQx7Fkg2/3+yGf4p
 R2lRvEac4NA3UXAEqhSEPs3J8bBG0r0= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader stevel-london.s3.eu-west-2.amazonaws.com TLSv1.2
   ```
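   
   For anyone who wants to recover that job ID from a log entry afterwards, a small sketch using the same helper the referrer-header tests exercise; the referrer string below is a trimmed, assumed example rather than a verbatim log field:
   
   ```java
   import java.util.Map;
   
   import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader;
   
   public final class ReferrerJobIdExample {
     private ReferrerJobIdExample() {
     }
   
     public static void main(String[] args) throws Exception {
       // A referrer value shaped like the quoted field in the log line
       // above; in practice it would be cut out of the S3 access log.
       String referrer = "https://audit.example.org/op_delete/"
           + "7e459c19-f1fe-4713-9788-35d77206f9cc-00000012/"
           + "?op=op_delete&ji=job_1620911577786_0004"
           + "&id=7e459c19-f1fe-4713-9788-35d77206f9cc-00000012";
       Map<String, String> params =
           HttpReferrerAuditHeader.extractQueryParameters(referrer);
       // "ji" carries the MR job ID that the committers pass in.
       System.out.println("job id = " + params.get("ji"));
     }
   }
   ```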




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811485602


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 42 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 54s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 43s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  20m  1s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 2034 unchanged - 1 fixed = 2036 total (was 2035)  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  0s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1929 unchanged - 1 fixed = 1930 total (was 1930)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 46s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 41 new + 185 unchanged - 4 fixed = 226 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 44s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 34s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 26s |  |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   2m 32s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 31s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   | Failed junit tests | hadoop.fs.s3a.audit.TestLoggingAuditor |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 2033a173c143 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5cd2126f0303cc455a52bac2202955b3bf721a81 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/testReport/ |
   | Max. process+thread count | 1279 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-843222945


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 18s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/25/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-840782115


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 17s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/21/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-825791293


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 53s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 56s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 56s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1936 unchanged - 1 fixed = 1937 total (was 1937)  |
   | +1 :green_heart: |  compile  |  17m 57s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 57s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1831 unchanged - 1 fixed = 1832 total (was 1832)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 33s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 23s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 22s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux c48af188307e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1c7c9abed2efc53b641ee2feae96b455904660c9 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/testReport/ |
   | Max. process+thread count | 1258 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r637867260



##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/ITestAuditAccessChecks.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.security.AccessControlException;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.touch;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_ACCESS_CHECK_FAILURE;
+import static org.apache.hadoop.fs.s3a.Statistic.AUDIT_REQUEST_EXECUTION;
+import static org.apache.hadoop.fs.s3a.Statistic.INVOCATION_ACCESS;
+import static org.apache.hadoop.fs.s3a.Statistic.STORE_IO_REQUEST;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.AUDIT_SERVICE_CLASSNAME;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.resetAuditOptions;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_ALL_PROBES;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.FILE_STATUS_FILE_PROBE;
+import static org.apache.hadoop.fs.s3a.performance.OperationCost.ROOT_FILE_STATUS_PROBE;
+import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString;
+
+/**
+ * Test S3A FS Access permit/deny is passed through all the way to the
+ * auditor.
+ * Uses {@link AccessCheckingAuditor} to enable/disable access.
+ * There are currently no contract tests for this; the behaviour
+ * is based on the base FileSystem implementation.
+ */
+public class ITestAuditAccessChecks extends AbstractS3ACostTest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestAuditAccessChecks.class);
+
+  private AccessCheckingAuditor auditor;
+
+  public ITestAuditAccessChecks() {
+    super(true);
+  }
+
+  @Override
+  public Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    resetAuditOptions(conf);
+    conf.set(AUDIT_SERVICE_CLASSNAME, AccessCheckingAuditor.CLASS);
+    return conf;
+  }
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+    auditor = (AccessCheckingAuditor) getFileSystem().getAuditor();
+  }
+
+  @Test
+  public void testFileAccessAllowed() throws Throwable {
+    describe("Enable checkaccess and verify it works with expected"
+        + " statitics");
+    auditor.setAccessAllowed(true);
+    Path path = methodPath();
+    S3AFileSystem fs = getFileSystem();
+    touch(fs, path);
+    verifyMetrics(
+        () -> access(fs, path),
+        with(INVOCATION_ACCESS, 1),
+        whenRaw(FILE_STATUS_FILE_PROBE));
+  }
+
+  private String access(final S3AFileSystem fs, final Path path)

Review comment:
       done; added javadocs too.






[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847222690


   not sure what is up with yetus there. Submitted again, with some updated docs




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-828721301


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 21s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/17/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847092269


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 44 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 30s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 45s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 15s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  23m 15s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 1995 unchanged - 3 fixed = 1998 total (was 1998)  |
   | +1 :green_heart: |  compile  |  20m 49s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  20m 49s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 1871 unchanged - 3 fixed = 1874 total (was 1874)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 58s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 5 new + 188 unchanged - 5 fixed = 193 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  hadoop-common in the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 0 new + 63 unchanged - 25 fixed = 63 total (was 88)  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  7s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 31s |  |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   2m 37s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 213m 16s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 45509ec1e952 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 14d944bc6fe8b0bc94e72be2d9f73a001061d187 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/testReport/ |
   | Max. process+thread count | 2004 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/26/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842351843


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/24/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran merged pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran merged pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807


   




[GitHub] [hadoop] steveloughran commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r637815424



##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java
##########
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.IOException;
+import java.util.Map;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.After;
+import org.junit.Before;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.Statistic;
+import org.apache.hadoop.fs.s3a.api.RequestFactory;
+import org.apache.hadoop.fs.s3a.impl.RequestFactoryImpl;
+import org.apache.hadoop.fs.statistics.IOStatisticAssertions;
+import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.s3a.Statistic.INVOCATION_GET_FILE_STATUS;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.UNAUDITED_OPERATION;
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.createIOStatisticsStoreForAuditing;
+import static org.apache.hadoop.service.ServiceOperations.stopQuietly;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Abstract class for auditor unit tests.
+ */
+public abstract class AbstractAuditingTest extends AbstractHadoopTestBase {
+
+  protected static final String OPERATION
+      = INVOCATION_GET_FILE_STATUS.getSymbol();
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AbstractAuditingTest.class);
+
+  public static final String PATH_1 = "/path1";
+
+  public static final String PATH_2 = "/path2";
+
+  /**
+   * Statistics store with the auditor counters wired up.
+   */
+  private final IOStatisticsStore ioStatistics =
+      createIOStatisticsStoreForAuditing();
+
+  private RequestFactory requestFactory;
+
+  private AuditManagerS3A manager;
+
+  @Before
+  public void setup() throws Exception {
+    requestFactory = RequestFactoryImpl.builder()
+        .withBucket("bucket")
+        .build();
+    manager = AuditIntegration.createAndStartAuditManager(
+        createConfig(),
+        ioStatistics);
+  }
+
+  /**
+   * Create config.
+   * @return config to use when creating a manager
+   */
+  protected abstract Configuration createConfig();
+
+  @After
+  public void teardown() {
+    stopQuietly(manager);
+  }
+
+  protected IOStatisticsStore getIOStatistics() {
+    return ioStatistics;
+  }
+
+  protected RequestFactory getRequestFactory() {
+    return requestFactory;
+  }
+
+  protected AuditManagerS3A getManager() {
+    return manager;
+  }
+
+  /**
+   * Assert that a specific span is active.
+   * This matches on the wrapped spans.
+   * @param span span to assert over.
+   */
+  protected void assertActiveSpan(final AuditSpan span) {
+    assertThat(activeSpan())
+        .isSameAs(span);
+  }
+
+  /**
+   * Assert a span is unbound/invalid.
+   * @param span span to assert over.
+   */
+  protected void assertUnbondedSpan(final AuditSpan span) {
+    assertThat(span.isValidSpan())
+        .describedAs("Validity of %s", span)
+        .isFalse();
+  }
+
+  protected AuditSpanS3A activeSpan() {
+    return manager.getActiveAuditSpan();
+  }
+
+  /**
+   * Create a head request and pass it through the manager's beforeExecution()
+   * callback.
+   * @return a processed request.
+   */
+  protected GetObjectMetadataRequest head() {
+    return manager.beforeExecution(
+        requestFactory.newGetObjectMetadataRequest("/"));
+  }
+
+  /**
+   * Assert a head request fails as there is no
+   * active span.
+   */
+  protected void assertHeadUnaudited() throws Exception {
+    intercept(AuditFailureException.class,
+        UNAUDITED_OPERATION, this::head);
+  }
+
+  /**
+   * Assert that the audit failure is of a given value.
+   * Returns the value to assist in chaining.
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyAuditFailureCount(
+      final long expected) {
+    return verifyCounter(Statistic.AUDIT_FAILURE, expected);
+  }
+
+  /**
+   * Assert that the audit execution count
+   * is of a given value.
+   * Returns the value to assist in chaining.
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyAuditExecutionCount(
+      final long expected) {
+    return verifyCounter(Statistic.AUDIT_REQUEST_EXECUTION, expected);
+  }
+
+  /**
+   * Assert that a statistic counter is of a given value.
+   * Returns the value to assist in chaining.
+   * @param statistic statistic to check
+   * @param expected expected value
+   * @return the expected value.
+   */
+  protected long verifyCounter(final Statistic statistic,

Review comment:
       leaving protected in case a subclass wants to use it






[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847009025


   test run all good, getting a bit slow (tombstones?)
   
   ```
   [INFO]
   [WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 17
   [INFO]
   [INFO] ------------------------------------------------------------------------
   [INFO] BUILD SUCCESS
   [INFO] ------------------------------------------------------------------------
   [INFO] Total time:  37:52 min (Wall Clock)
   [INFO] Finished at: 2021-05-24T12:48:00+01:00
   [INFO] ------------------------------------------------------------------------
   ```




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811277712


   I'm going to do a squash of the PR and push up, as yetus has completely given up trying to build this




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-810541322


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 18s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/7/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842340255


   Gist showing some log output during a terasort test:
   https://gist.github.com/steveloughran/8e0aadb51c63f1c3538deda19ee952ae
   
   Some of the events (e.g. 183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8) have the job ID in the referrer header ("ji=job_1620911577786_0006"). This is only set during the FS operations the S3A committer itself performs for the task and job, as those are the only ones we know are explicitly related to a job. If we were confident that whichever thread called `Committer.setupTask()` was the only thread making FileSystem API calls for that task then we could set it at the task level.
   
   The `org.apache.hadoop.fs.audit.CommonAuditContext` class provides global and thread-local context maps to let apps attach such attributes; the new ManifestCommitter will be setting them so that once ABFS picks up the same auditing, the context info will come through.
   
   Modified versions of Hive, Spark etc. could use this API to set any of their context info when a specific thread is scheduled to work on a given query; the Hadoop committer isn't the right place to try to guess it.
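   
   A minimal sketch of how an application thread might do that, attaching a job attribute before doing FileSystem work (the accessor and put/remove calls follow the class as described above; the key string "ji" matches the referrer header field shown earlier, but treat the whole snippet as illustrative):
   
   ```
   import org.apache.hadoop.fs.audit.CommonAuditContext;
   
   public class QueryAuditExample {
   
     /**
      * Run some work with a job ID attached to this thread's audit context,
      * removing it again afterwards so later work is not mislabelled.
      */
     public static void runWithJobContext(String jobId, Runnable work) {
       CommonAuditContext context = CommonAuditContext.currentAuditContext();
       context.put("ji", jobId);
       try {
         // any S3A FileSystem calls made on this thread will now carry
         // the job ID in their audit spans and referrer headers
         work.run();
       } finally {
         context.remove("ji");
       }
     }
   }
   ```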
   
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-833685990


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 18s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/19/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-805988234


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 24s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841225106


   I don't get why the patch doesn't apply. Going to squash the patches, rebase to trunk, and retry




[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818230736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 41s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 48s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1936 unchanged - 1 fixed = 1937 total (was 1937)  |
   | +1 :green_heart: |  compile  |  18m  1s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  1s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1831 unchanged - 1 fixed = 1832 total (was 1832)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 51s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 27s |  |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   2m 38s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 198m 15s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   | Failed junit tests | hadoop.fs.s3a.audit.TestLoggingAuditor |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 3f17463dc344 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c43af3040ce328bfd39cd2145b46973bb4b89c47 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
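
    For readers skimming the SpotBugs findings above, a generic sketch (class and field names invented, not the real `NoopAuditManager`) of the "read of unwritten field" (UwF) pattern being reported: a field that no code path ever assigns can only be null when read.

    ```java
    public class UnwrittenFieldSketch {
      // never assigned anywhere in the class, so it is always null
      private Object auditor;

      public String getUnbondedSpan() {
        // SpotBugs flags this read of the unwritten field: guaranteed NPE
        return auditor.toString();
      }
    }
    ```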
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r637865440



##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java
##########
@@ -0,0 +1,317 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.regex.Matcher;
+
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.fs.audit.CommonAuditContext;
+import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.fs.s3a.audit.AuditTestSupport.loggingAuditConfig;
+import static org.apache.hadoop.fs.s3a.audit.S3AAuditConstants.REFERRER_HEADER_FILTER;
+import static org.apache.hadoop.fs.s3a.audit.S3LogParser.*;
+import static org.apache.hadoop.fs.s3a.impl.HeaderProcessing.HEADER_REFERRER;
+import static org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader.maybeStripWrappedQuotes;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_FILESYSTEM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PRINCIPAL;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD0;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_TIMESTAMP;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests for referrer audit header generation/parsing.
+ */
+public class TestHttpReferrerAuditHeader extends AbstractAuditingTest {
+
+  /**
+   * Logging.
+   */
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestHttpReferrerAuditHeader.class);
+
+  private LoggingAuditor auditor;
+
+  @Before
+  public void setup() throws Exception {
+    super.setup();
+
+    auditor = (LoggingAuditor) getManager().getAuditor();
+  }
+
+  /**
+   * Create the config from {@link AuditTestSupport#loggingAuditConfig()}
+   * and patch in filtering for fields x1, x2, x3.
+   * @return a logging configuration.
+   */
+  protected Configuration createConfig() {
+    final Configuration conf = loggingAuditConfig();
+    conf.set(REFERRER_HEADER_FILTER, "x1, x2, x3");
+    return conf;
+  }
+
+  /**
+   * This verifies that passing a request through the audit manager
+   * causes the http referrer header to be added, that it can
+   * be split to query parameters, and that those parameters match
+   * those of the active wrapped span.
+   */
+  @Test
+  public void testHttpReferrerPatchesTheRequest() throws Throwable {
+    AuditSpan span = span();
+    long ts = span.getTimestamp();
+    GetObjectMetadataRequest request = head();
+    Map<String, String> headers
+        = request.getCustomRequestHeaders();
+    assertThat(headers)
+        .describedAs("Custom headers")
+        .containsKey(HEADER_REFERRER);
+    String header = headers.get(HEADER_REFERRER);
+    LOG.info("Header is {}", header);
+    Map<String, String> params
+        = HttpReferrerAuditHeader.extractQueryParameters(header);
+    assertMapContains(params, PARAM_PRINCIPAL,
+        UserGroupInformation.getCurrentUser().getUserName());
+    assertMapContains(params, PARAM_FILESYSTEM_ID, auditor.getAuditorId());
+    assertMapContains(params, PARAM_OP, OPERATION);
+    assertMapContains(params, PARAM_PATH, PATH_1);
+    assertMapContains(params, PARAM_PATH2, PATH_2);
+    String threadID = CommonAuditContext.currentThreadID();
+    assertMapContains(params, PARAM_THREAD0, threadID);
+    assertMapContains(params, PARAM_THREAD1, threadID);
+    assertMapContains(params, PARAM_ID, span.getSpanId());
+    assertThat(span.getTimestamp())
+        .describedAs("Timestamp of " + span)
+        .isEqualTo(ts);
+
+    assertMapContains(params, PARAM_TIMESTAMP,
+        Long.toString(ts));
+  }
+
+  @Test
+  public void testHeaderComplexPaths() throws Throwable {
+    String p1 = "s3a://dotted.bucket/path: value/subdir";
+    String p2 = "s3a://key/";
+    AuditSpan span = getManager().createSpan(OPERATION, p1, p2);
+    long ts = span.getTimestamp();
+    Map<String, String> params = issueRequestAndExtractParameters();
+    assertMapContains(params, PARAM_PRINCIPAL,
+        UserGroupInformation.getCurrentUser().getUserName());
+    assertMapContains(params, PARAM_FILESYSTEM_ID, auditor.getAuditorId());
+    assertMapContains(params, PARAM_OP, OPERATION);
+    assertMapContains(params, PARAM_PATH, p1);
+    assertMapContains(params, PARAM_PATH2, p2);
+    String threadID = CommonAuditContext.currentThreadID();
+    assertMapContains(params, PARAM_THREAD0, threadID);
+    assertMapContains(params, PARAM_THREAD1, threadID);
+    assertMapContains(params, PARAM_ID, span.getSpanId());
+    assertThat(span.getTimestamp())
+        .describedAs("Timestamp of " + span)
+        .isEqualTo(ts);
+
+    assertMapContains(params, PARAM_TIMESTAMP,
+        Long.toString(ts));
+  }
+
+  /**
+   * Issue a request, then get the header field and parse it to the parameter.
+   * @return map of query params on the referrer header.
+   * @throws URISyntaxException failure to parse the header as a URI.
+   */
+  private Map<String, String> issueRequestAndExtractParameters()
+      throws URISyntaxException {
+    head();
+    return HttpReferrerAuditHeader.extractQueryParameters(
+        auditor.getLastHeader());
+  }
+
+
+  /**
+   * Test that headers are filtered out if configured.
+   */
+  @Test
+  public void testHeaderFiltering() throws Throwable {
+    // add two attributes, x2 will be filtered.
+    AuditSpan span = getManager().createSpan(OPERATION, null, null);
+    auditor.addAttribute("x0", "x0");
+    auditor.addAttribute("x2", "x2");
+    final Map<String, String> params
+        = issueRequestAndExtractParameters();
+    assertThat(params)
+        .doesNotContainKey("x2");
+
+  }
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  public static final String SAMPLE_LOG_ENTRY =
+      "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4000000"
+          + " bucket-london"
+          + " [13/May/2021:11:26:06 +0000]"
+          + " 109.157.171.174"
+          + " arn:aws:iam::152813717700:user/dev"
+          + " M7ZB7C4RTKXJKTM9"
+          + " REST.PUT.OBJECT"
+          + " fork-0001/test/testParseBrokenCSVFile"
+          + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+          + " 200"
+          + " -"
+          + " -"
+          + " 794"
+          + " 55"
+          + " 17"
+          + " \"https://audit.example.org/op_create/"
+          + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-00000278/"
+          + "?op=op_create"
+          + "&p1=fork-0001/test/testParseBrokenCSVFile"
+          + "&pr=alice"
+          + "&ps=2eac5a04-2153-48db-896a-09bc9a2fd132"
+          + "&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-00000278&t0=154"
+          + "&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&"
+          + "ts=1620905165700\""
+          + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+          + " -"
+          + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+          + " SigV4"
+          + " ECDHE-RSA-AES128-GCM-SHA256"
+          + " AuthHeader"
+          + " bucket-london.s3.eu-west-2.amazonaws.com"
+          + " TLSv1.2";
+
+  private static final String DESCRIPTION = String.format(
+      "log entry %s split by %s", SAMPLE_LOG_ENTRY,
+      LOG_ENTRY_PATTERN);
+
+  /**
+   * Match the log entry and validate the results.
+   */
+  @Test
+  public void testMatchAWSLogEntry() throws Throwable {
+
+    LOG.info("Matcher pattern is\n'{}'", LOG_ENTRY_PATTERN);
+    LOG.info("Log entry is\n'{}'", SAMPLE_LOG_ENTRY);
+    final Matcher matcher = LOG_ENTRY_PATTERN.matcher(SAMPLE_LOG_ENTRY);
+
+    // match the pattern against the entire log entry.
+    assertThat(matcher.matches())

Review comment:
       ooh, that was a bug in my assert. Added an ` .isTrue()` at the end. well spotted
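
    For context, a minimal sketch (not part of the patch) of why the bare call passes vacuously: AssertJ performs no check until a terminal assertion such as `.isTrue()` is chained onto `assertThat(boolean)`.

    ```java
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import static org.assertj.core.api.Assertions.assertThat;

    public class MatcherAssertSketch {
      public static void main(String[] args) {
        Matcher matcher = Pattern.compile("a+").matcher("bbb");
        // wraps the boolean but performs no check: this "assertion" passes vacuously
        assertThat(matcher.matches());
        // only a terminal call actually asserts; this line throws AssertionError
        assertThat(matcher.matches()).isTrue();
      }
    }
    ```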
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] bogthe commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
bogthe commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r633141325



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/HttpReferrerAuditHeader.java
##########
@@ -0,0 +1,500 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.StringJoiner;
+import java.util.function.Supplier;
+import java.util.stream.Collectors;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.store.audit.CommonAuditContext;
+import org.apache.http.NameValuePair;
+import org.apache.http.client.utils.URLEncodedUtils;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH2;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.REFERRER_ORIGIN_HOST;
+
+/**
+ * Contains all the logic for generating an HTTP "Referer"
+ * entry; includes escaping query params.
+ * Tests for this are in
+ * {@code org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader}
+ * so as to verify that header generation in the S3A auditors and
+ * S3 log parsing both work.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public final class HttpReferrerAuditHeader {
+
+  /**
+   * Format of path to build: {@value}.
+   * the params passed in are (context ID, span ID, op)
+   */
+  public static final String REFERRER_PATH_FORMAT = "/%3$s/%2$s/";
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(HttpReferrerAuditHeader.class);
+
+  /**
+   * Log for warning of problems creating headers; will only log
+   * a problem once per process instance.
+   * This is to avoid logs being flooded with errors.
+   */
+  private static final LogExactlyOnce WARN_OF_URL_CREATION =
+      new LogExactlyOnce(LOG);
+
+  /** Context ID. */
+  private final String contextId;
+
+  /** operation name. */
+  private final String operationName;
+
+  /** Span ID. */
+  private final String spanId;
+
+  /** optional first path. */
+  private final String path1;
+
+  /** optional second path. */
+  private final String path2;
+
+  /**
+   * The header as created in the constructor; used in toString().
+   * A new header is built on demand in {@link #buildHttpReferrer()}
+   * so that evaluated attributes are dynamically evaluated
+   * in the correct thread/place.
+   */
+  private final String initialHeader;
+
+  /**
+   * Map of simple attributes.
+   */
+  private final Map<String, String> attributes;
+
+  /**
+   * Parameters dynamically evaluated on the thread just before
+   * the request is made.
+   */
+  private final Map<String, Supplier<String>> evaluated;
+
+  /**
+   * Elements to filter from the final header.
+   */
+  private final Set<String> filter;
+
+  /**
+   * Instantiate.
+   *
+   * Context and operationId are expected to be well formed
+   * numeric/hex strings, at least adequate to be
+   * used as individual path elements in a URL.
+   */
+  private HttpReferrerAuditHeader(
+      final Builder builder) {
+    this.contextId = requireNonNull(builder.contextId);
+    this.evaluated = builder.evaluated;
+    this.filter = builder.filter;
+    this.operationName = requireNonNull(builder.operationName);
+    this.path1 = builder.path1;
+    this.path2 = builder.path2;
+    this.spanId = requireNonNull(builder.spanId);
+
+    // copy the parameters from the builder and extend
+    attributes = builder.attributes;
+
+    addAttribute(PARAM_OP, operationName);
+    addAttribute(PARAM_PATH, path1);
+    addAttribute(PARAM_PATH2, path2);
+    addAttribute(PARAM_ID, spanId);
+
+    // patch in global context values where not set
+    Iterable<Map.Entry<String, String>> globalContextValues
+        = builder.globalContextValues;
+    if (globalContextValues != null) {
+      for (Map.Entry<String, String> entry : globalContextValues) {
+        attributes.putIfAbsent(entry.getKey(), entry.getValue());

Review comment:
       What are the implications of merging multiple `globalContextValues` maps into a single one (i.e. `attributes`)? Will there be a situation where different contexts have the same `key` but different `values`? It doesn't seem too bad; maybe a warning in the comments / documentation for this scenario is enough.
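
    A small illustration (keys and values invented) of the `putIfAbsent` semantics above: when a global context entry and a span attribute share a key, the span-level value wins, so the main risk is surprise rather than data loss.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class MergePrecedenceSketch {
      public static void main(String[] args) {
        Map<String, String> attributes = new HashMap<>();
        attributes.put("job", "span-level");        // set by the span builder

        Map<String, String> globalContext = new HashMap<>();
        globalContext.put("job", "global-level");   // same key, different value
        globalContext.put("process", "1234");

        // patch in global context values where not set, as the constructor does
        globalContext.forEach(attributes::putIfAbsent);

        System.out.println(attributes.get("job"));      // span-level (unchanged)
        System.out.println(attributes.get("process"));  // 1234 (filled in)
      }
    }
    ```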

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/store/audit/TestCommonAuditContext.java
##########
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store.audit;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+import org.assertj.core.api.AbstractStringAssert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_COMMAND;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PROCESS;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_THREAD1;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.PROCESS_ID;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.clearGlobalContextEntry;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.currentAuditContext;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.getGlobalContextEntry;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.getGlobalContextEntries;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.noteEntryPoint;
+import static org.apache.hadoop.fs.store.audit.CommonAuditContext.setGlobalContextEntry;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests of the common audit context.
+ */
+public class TestCommonAuditContext extends AbstractHadoopTestBase {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestCommonAuditContext.class);
+
+  private final CommonAuditContext context = currentAuditContext();
+  /**
+   * We can set, get and enumerate global context values.
+   */
+  @Test
+  public void testGlobalSetGetEnum() throws Throwable {
+
+    String s = "command";
+    setGlobalContextEntry(PARAM_COMMAND, s);
+    assertGlobalEntry(PARAM_COMMAND)
+        .isEqualTo(s);
+    // and the iterators.
+    List<Map.Entry<String, String>> list = StreamSupport
+        .stream(getGlobalContextEntries().spliterator(),
+            false)
+        .filter(e -> e.getKey().equals(PARAM_COMMAND))
+        .collect(Collectors.toList());
+    assertThat(list)
+        .hasSize(1)
+        .allMatch(e -> e.getValue().equals(s));
+  }
+
+  @Test
+  public void testVerifyProcessID() throws Throwable {
+    assertThat(
+        getGlobalContextEntry(PARAM_PROCESS))
+        .describedAs("global context value of %s", PARAM_PROCESS)
+        .isEqualTo(PROCESS_ID);
+  }
+
+
+  @Test
+  public void testNullValue() throws Throwable {
+    assertThat(context.get(PARAM_PROCESS))
+        .describedAs("Value of context element %s", PARAM_PROCESS)
+        .isNull();
+  }
+
+  @Test
+  public void testThreadId() throws Throwable {
+    String t1 = getContextValue(PARAM_THREAD1);
+    Long tid = Long.valueOf(t1);
+    assertThat(tid).describedAs("thread ID")
+        .isEqualTo(Thread.currentThread().getId());
+  }
+
+  /**
+   * Verify functions are dynamically evaluated.
+   */
+  @Test
+  public void testDynamicEval() throws Throwable {
+    context.reset();
+    final AtomicBoolean ab = new AtomicBoolean(false);
+    context.put("key", () ->
+        Boolean.toString(ab.get()));
+    assertContextValue("key")
+        .isEqualTo("false");
+    // update the reference and the next get call will
+    // pick up the new value.
+    ab.set(true);
+    assertContextValue("key")
+        .isEqualTo("true");
+  }
+
+  private String getContextValue(final String key) {
+    String val = context.get(key);
+    assertThat(val).isNotBlank();
+    return val;
+  }
+
+  /**
+   * Start an assertion on a context value.
+   * @param key key to look up
+   * @return an assert which can be extended call
+   */
+  private AbstractStringAssert<?> assertContextValue(final String key) {
+    String val = context.get(key);
+    return assertThat(val)
+        .describedAs("Value of context element %s", key)
+        .isNotBlank();
+  }
+
+  @Test
+  public void testNoteEntryPoint() throws Throwable {
+    setAndAssertEntryPoint(this).isEqualTo("TestCommonAuditContext");
+

Review comment:
       nit: extra space
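
    Aside from the nit: `testDynamicEval` above is worth a second look. A self-contained sketch (simplified, not the Hadoop class) of the Supplier-based pattern it verifies, where values are computed at read time rather than at put time:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.function.Supplier;

    public class DynamicEvalSketch {
      private final Map<String, Supplier<String>> context = new HashMap<>();

      public void put(String key, Supplier<String> value) {
        context.put(key, value);
      }

      public String get(String key) {
        Supplier<String> s = context.get(key);
        return s == null ? null : s.get();  // evaluated on every read
      }

      public static void main(String[] args) {
        DynamicEvalSketch ctx = new DynamicEvalSketch();
        AtomicBoolean ab = new AtomicBoolean(false);
        ctx.put("key", () -> Boolean.toString(ab.get()));
        System.out.println(ctx.get("key"));  // false
        ab.set(true);
        System.out.println(ctx.get("key"));  // true: picked up dynamically
      }
    }
    ```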

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java
##########
@@ -117,13 +122,17 @@
   /**
    * Source of time.
    */
-  private ITtlTimeProvider timeProvider;
+
+  /** Time source for S3Guard TTLs. */
+  private final ITtlTimeProvider timeProvider;
+
+  /** Operation Auditor. */
+  private final AuditSpanSource<AuditSpanS3A> auditor;
 
   /**
    * Instantiate.
-   * @deprecated as public method: use {@link StoreContextBuilder}.
    */
-  public StoreContext(
+  StoreContext(

Review comment:
       nit: is the access modifier intentionally left out?

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -2430,13 +2749,16 @@ PutObjectResult putObjectDirect(PutObjectRequest putObjectRequest)
     LOG.debug("PUT {} bytes to {}", len, putObjectRequest.getKey());
     incrementPutStartStatistics(len);
     try {
-      PutObjectResult result = s3.putObject(putObjectRequest);
+      PutObjectResult result = trackDurationOfSupplier(
+          getDurationTrackerFactory(),
+          OBJECT_PUT_REQUESTS.getSymbol(), () ->
+              s3.putObject(putObjectRequest));
       incrementPutCompletedStatistics(true, len);
       // update metadata
       finishedWrite(putObjectRequest.getKey(), len,
           result.getETag(), result.getVersionId(), null);
       return result;
-    } catch (AmazonClientException e) {
+    } catch (SdkBaseException e) {

Review comment:
       Any reason for moving to `SdkBaseException`? I see this `putObjectDirect` method declares that it throws `AmazonClientException`; no bug, just a small inconsistency.
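
    For reference, my understanding of the v1 SDK hierarchy is that `AmazonClientException` extends `SdkBaseException`, so the wider catch still satisfies a method declared to throw `AmazonClientException`; a minimal sketch under that assumption:

    ```java
    import com.amazonaws.AmazonClientException;
    import com.amazonaws.SdkBaseException;

    public class ExceptionHierarchySketch {
      public static void main(String[] args) {
        try {
          throw new AmazonClientException("simulated client failure");
        } catch (SdkBaseException e) {
          // the subclass is caught by the parent type, so no case is lost
          System.out.println("caught via parent type: " + e.getMessage());
        }
      }
    }
    ```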

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/audit/AuditingFunctions.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store.audit;
+
+import javax.annotation.Nullable;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.util.functional.CallableRaisingIOE;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+import org.apache.hadoop.util.functional.InvocationRaisingIOE;
+
+/**
+ * Static methods to assist in working with Audit Spans.
+ * The {@code withinX} calls take a span and a closure/function etc.
+ * and return a new function of the same type which will
+ * activate the span.
+ * They do not deactivate it afterwards to avoid accidentally deactivating
+ * the already-active span during a chain of operations in the same thread.
+ * All they do is ensure that the given span is guaranteed to be
+ * active when the passed in callable/function/invokable is evaluated.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public class AuditingFunctions {
+
+  /**
+   * Given a callable, return a new callable which
+   * activates and deactivates the span around the inner invocation.

Review comment:
       Comment out of date. This says the callable `activates` and `deactivates` the span, while the class comment says `They do not deactivate it afterwards...`. The callable also contains no call to deactivate.
   
   Same comment applies for all methods in this class. 
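
    For clarity, a sketch (simplified signatures, not the Hadoop API) of the activate-without-deactivate wrapping the class comment describes: the returned callable guarantees the span is active while the inner callable runs, and leaves it active afterwards.

    ```java
    import java.util.concurrent.Callable;

    public final class WithinSpanSketch {

      /** Stand-in for the real audit span type. */
      interface Span {
        void activate();
      }

      static <T> Callable<T> withinAuditSpan(Span span, Callable<T> inner) {
        return () -> {
          span.activate();      // ensure the span is active on this thread...
          return inner.call();  // ...while the inner callable runs; no deactivate
        };
      }
    }
    ```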

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java
##########
@@ -0,0 +1,695 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+import com.amazonaws.AmazonWebServiceRequest;
+import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
+import com.amazonaws.services.s3.model.CannedAccessControlList;
+import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
+import com.amazonaws.services.s3.model.CopyObjectRequest;
+import com.amazonaws.services.s3.model.DeleteObjectRequest;
+import com.amazonaws.services.s3.model.DeleteObjectsRequest;
+import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
+import com.amazonaws.services.s3.model.GetObjectRequest;
+import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
+import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
+import com.amazonaws.services.s3.model.ListNextBatchOfObjectsRequest;
+import com.amazonaws.services.s3.model.ListObjectsRequest;
+import com.amazonaws.services.s3.model.ListObjectsV2Request;
+import com.amazonaws.services.s3.model.ObjectListing;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.PartETag;
+import com.amazonaws.services.s3.model.PutObjectRequest;
+import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
+import com.amazonaws.services.s3.model.SSECustomerKey;
+import com.amazonaws.services.s3.model.SelectObjectContentRequest;
+import com.amazonaws.services.s3.model.UploadPartRequest;
+import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AEncryptionMethods;
+import org.apache.hadoop.fs.s3a.api.RequestFactory;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecretOperations;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecrets;
+
+import static org.apache.commons.lang3.StringUtils.isNotEmpty;
+import static org.apache.hadoop.fs.s3a.impl.InternalConstants.DEFAULT_UPLOAD_PART_COUNT_LIMIT;
+import static org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkArgument;
+import static org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull;
+
+/**
+ * The standard implementation of the request factory.
+ * This creates AWS SDK request classes for the specific bucket,
+ * with standard options/headers set.
+ * It is also where custom request parameters can be set.
+ *
+ * All creation of AWS S3 requests MUST be through this class so that
+ * common options (encryption etc.) can be added here,
+ * and so that any chained transformation of requests can be applied.
+ *
+ * This is where audit span information is added to the requests,
+ * until it is done in the AWS SDK itself.
+ *
+ * All created requests will be passed through
+ * {@link PrepareRequest#prepareRequest(AmazonWebServiceRequest)} before
+ * being returned to the caller.
+ */
+public class RequestFactoryImpl implements RequestFactory {
+
+  public static final Logger LOG = LoggerFactory.getLogger(
+      RequestFactoryImpl.class);
+
+  /**
+   * Target bucket.
+   */
+  private final String bucket;
+
+  /**
+   * Encryption secrets.
+   */
+  private EncryptionSecrets encryptionSecrets;
+
+  /**
+   * ACL For new objects.
+   */
+  private final CannedAccessControlList cannedACL;
+
+  /**
+   * Max number of multipart entries allowed in a large
+   * upload. Tunable for testing only.
+   */
+  private final long multipartPartCountLimit;
+
+  /**
+   * Requester Pays.
+   * This is to be wired up in a PR with its
+   * own tests and docs.
+   */
+  private final boolean requesterPays;
+
+  /**
+   * Callback to prepare requests.
+   */
+  private final PrepareRequest requestPreparer;
+
+  /**
+   * Constructor.
+   * @param builder builder with all the configuration.
+   */
+  protected RequestFactoryImpl(
+      final RequestFactoryBuilder builder) {
+    this.bucket = builder.bucket;
+    this.cannedACL = builder.cannedACL;
+    this.encryptionSecrets = builder.encryptionSecrets;
+    this.multipartPartCountLimit = builder.multipartPartCountLimit;
+    this.requesterPays = builder.requesterPays;
+    this.requestPreparer = builder.requestPreparer;
+  }
+
+  /**
+   * Preflight preparation of AWS request.
+   * @param <T> web service request
+   * @return prepared entry.
+   */
+  @Retries.OnceRaw
+  private <T extends AmazonWebServiceRequest> T prepareRequest(T t) {
+    return requestPreparer != null
+        ? requestPreparer.prepareRequest(t)
+        : t;
+  }
+
+  /**
+   * Get the canned ACL of this FS.
+   * @return an ACL, if any
+   */
+  @Override
+  public CannedAccessControlList getCannedACL() {
+    return cannedACL;
+  }
+
+  /**
+   * Get the target bucket.
+   * @return the bucket.
+   */
+  protected String getBucket() {
+    return bucket;
+  }
+
+  /**
+   * Create the AWS SDK structure used to configure SSE,
+   * if the encryption secrets contain the information/settings for this.
+   * @return an optional set of KMS Key settings
+   */
+  @Override
+  public Optional<SSEAwsKeyManagementParams> generateSSEAwsKeyParams() {
+    return EncryptionSecretOperations.createSSEAwsKeyManagementParams(
+        encryptionSecrets);
+  }
+
+  /**
+   * Create the SSE-C structure for the AWS SDK, if the encryption secrets
+   * contain the information/settings for this.
+   * This will contain a secret extracted from the bucket/configuration.
+   * @return an optional customer key.
+   */
+  @Override
+  public Optional<SSECustomerKey> generateSSECustomerKey() {
+    return EncryptionSecretOperations.createSSECustomerKey(
+        encryptionSecrets);
+  }
+
+  /**
+   * Get the encryption algorithm of this endpoint.
+   * @return the encryption algorithm.
+   */
+  @Override
+  public S3AEncryptionMethods getServerSideEncryptionAlgorithm() {
+    return encryptionSecrets.getEncryptionMethod();
+  }
+
+  /**
+   * Sets server side encryption parameters to the part upload
+   * request when encryption is enabled.
+   * @param request upload part request
+   */
+  protected void setOptionalUploadPartRequestParameters(
+      UploadPartRequest request) {
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Sets server side encryption parameters on the GET request
+   * when encryption is enabled.
+   * @param request the request to patch
+   */
+  protected void setOptionalGetObjectMetadataParameters(
+      GetObjectMetadataRequest request) {
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional parameters when initiating the request (encryption,
+   * headers, storage, etc).
+   * @param request request to patch.
+   */
+  protected void setOptionalMultipartUploadRequestParameters(
+      InitiateMultipartUploadRequest request) {
+    generateSSEAwsKeyParams().ifPresent(request::setSSEAwsKeyManagementParams);
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional parameters for a PUT request.
+   * @param request request to patch.
+   */
+  protected void setOptionalPutRequestParameters(PutObjectRequest request) {
+    generateSSEAwsKeyParams().ifPresent(request::setSSEAwsKeyManagementParams);
+    generateSSECustomerKey().ifPresent(request::setSSECustomerKey);
+  }
+
+  /**
+   * Set the optional metadata for an object being created or copied.
+   * @param metadata to update.
+   */
+  protected void setOptionalObjectMetadata(ObjectMetadata metadata) {
+    final S3AEncryptionMethods algorithm
+        = getServerSideEncryptionAlgorithm();
+    if (S3AEncryptionMethods.SSE_S3 == algorithm) {
+      metadata.setSSEAlgorithm(algorithm.getMethod());
+    }
+  }
+
+  /**
+   * Create a new object metadata instance.
+   * Any standard metadata headers are added here, for example:
+   * encryption.
+   *
+   * @param length length of data to set in header; Ignored if negative
+   * @return a new metadata instance
+   */
+  @Override
+  public ObjectMetadata newObjectMetadata(long length) {
+    final ObjectMetadata om = new ObjectMetadata();
+    setOptionalObjectMetadata(om);
+    if (length >= 0) {
+      om.setContentLength(length);
+    }
+    return om;
+  }
+
+  @Override
+  public CopyObjectRequest newCopyObjectRequest(String srcKey,
+      String dstKey,
+      ObjectMetadata srcom) {
+    CopyObjectRequest copyObjectRequest =
+        new CopyObjectRequest(getBucket(), srcKey, getBucket(), dstKey);
+    ObjectMetadata dstom = newObjectMetadata(srcom.getContentLength());
+    HeaderProcessing.cloneObjectMetadata(srcom, dstom);
+    setOptionalObjectMetadata(dstom);
+    copyEncryptionParameters(srcom, copyObjectRequest);
+    copyObjectRequest.setCannedAccessControlList(cannedACL);
+    copyObjectRequest.setNewObjectMetadata(dstom);
+    Optional.ofNullable(srcom.getStorageClass())
+        .ifPresent(copyObjectRequest::setStorageClass);
+    return prepareRequest(copyObjectRequest);
+  }
+
+  /**
+   * Propagate encryption parameters from source file if set else use the
+   * current filesystem encryption settings.
+   * @param srcom source object metadata.
+   * @param copyObjectRequest copy object request body.
+   */
+  protected void copyEncryptionParameters(
+      ObjectMetadata srcom,
+      CopyObjectRequest copyObjectRequest) {
+    String sourceKMSId = srcom.getSSEAwsKmsKeyId();
+    if (isNotEmpty(sourceKMSId)) {
+      // source KMS ID is propagated
+      LOG.debug("Propagating SSE-KMS settings from source {}",
+          sourceKMSId);
+      copyObjectRequest.setSSEAwsKeyManagementParams(
+          new SSEAwsKeyManagementParams(sourceKMSId));
+    }
+    switch (getServerSideEncryptionAlgorithm()) {
+    case SSE_S3:
+      /* no-op; this is set in destination object metadata */
+      break;
+
+    case SSE_C:
+      generateSSECustomerKey().ifPresent(customerKey -> {
+        copyObjectRequest.setSourceSSECustomerKey(customerKey);
+        copyObjectRequest.setDestinationSSECustomerKey(customerKey);
+      });
+      break;
+
+    case SSE_KMS:
+      generateSSEAwsKeyParams().ifPresent(
+          copyObjectRequest::setSSEAwsKeyManagementParams);
+      break;
+    default:
+    }
+  }
+  /**
+   * Create a putObject request.
+   * Adds the ACL and metadata
+   * @param key key of object
+   * @param metadata metadata header
+   * @param srcfile source file
+   * @return the request
+   */
+  @Override
+  public PutObjectRequest newPutObjectRequest(String key,
+      ObjectMetadata metadata, File srcfile) {
+    Preconditions.checkNotNull(srcfile);
+    PutObjectRequest putObjectRequest = new PutObjectRequest(getBucket(), key,
+        srcfile);
+    setOptionalPutRequestParameters(putObjectRequest);
+    putObjectRequest.setCannedAcl(cannedACL);
+    putObjectRequest.setMetadata(metadata);
+    return prepareRequest(putObjectRequest);
+  }
+
+  /**
+   * Create a {@link PutObjectRequest} request.
+   * The metadata is assumed to have been configured with the size of the
+   * operation.
+   * @param key key of object
+   * @param metadata metadata header
+   * @param inputStream source data.
+   * @return the request
+   */
+  @Override
+  public PutObjectRequest newPutObjectRequest(String key,
+      ObjectMetadata metadata,
+      InputStream inputStream) {
+    Preconditions.checkNotNull(inputStream);
+    Preconditions.checkArgument(isNotEmpty(key), "Null/empty key");
+    PutObjectRequest putObjectRequest = new PutObjectRequest(getBucket(), key,
+        inputStream, metadata);
+    setOptionalPutRequestParameters(putObjectRequest);
+    putObjectRequest.setCannedAcl(cannedACL);
+    return prepareRequest(putObjectRequest);
+  }
+
+  @Override
+  public PutObjectRequest newDirectoryMarkerRequest(String directory) {
+    String key = directory.endsWith("/")
+        ? directory
+        : (directory + "/");
+    // an input stream which is always empty
+    final InputStream im = new InputStream() {
+      @Override
+      public int read() throws IOException {
+        return -1;
+      }
+    };
+    // preparation happens in here
+    final ObjectMetadata md = newObjectMetadata(0L);
+    md.setContentType(HeaderProcessing.CONTENT_TYPE_X_DIRECTORY);
+    PutObjectRequest putObjectRequest =
+        newPutObjectRequest(key, md, im);
+    return putObjectRequest;
+  }
+
+  @Override
+  public ListMultipartUploadsRequest
+      newListMultipartUploadsRequest(String prefix) {
+    ListMultipartUploadsRequest request = new ListMultipartUploadsRequest(
+        getBucket());
+    if (prefix != null) {
+      request.setPrefix(prefix);
+    }
+    return prepareRequest(request);
+  }
+
+  @Override
+  public AbortMultipartUploadRequest newAbortMultipartUploadRequest(
+      String destKey,
+      String uploadId) {
+    return prepareRequest(new AbortMultipartUploadRequest(getBucket(),
+        destKey,
+        uploadId));
+  }
+
+  @Override
+  public InitiateMultipartUploadRequest newMultipartUploadRequest(
+      String destKey) {
+    final InitiateMultipartUploadRequest initiateMPURequest =
+        new InitiateMultipartUploadRequest(getBucket(),
+            destKey,
+            newObjectMetadata(-1));
+    initiateMPURequest.setCannedACL(getCannedACL());
+    setOptionalMultipartUploadRequestParameters(initiateMPURequest);
+    return prepareRequest(initiateMPURequest);
+  }
+
+  @Override
+  public CompleteMultipartUploadRequest newCompleteMultipartUploadRequest(
+      String destKey,
+      String uploadId,
+      List<PartETag> partETags) {
+    // a copy of the list is required, so that the AWS SDK doesn't
+    // attempt to sort an unmodifiable list.
+    return prepareRequest(new CompleteMultipartUploadRequest(bucket,
+        destKey, uploadId, new ArrayList<>(partETags)));
+

Review comment:
       nit: empty line
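
    On the hunk above: the defensive `new ArrayList<>(partETags)` matters because `List.sort` mutates in place. A small demonstration (not SDK code) of the failure mode it avoids:

    ```java
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class UnmodifiableSortSketch {
      public static void main(String[] args) {
        List<Integer> tags = Collections.unmodifiableList(Arrays.asList(3, 1, 2));
        try {
          tags.sort(null);  // natural ordering; the wrapper rejects the mutation
        } catch (UnsupportedOperationException expected) {
          System.out.println("cannot sort an unmodifiable list");
        }
        List<Integer> copy = new ArrayList<>(tags);
        copy.sort(null);           // the copy sorts without error
        System.out.println(copy);  // [1, 2, 3]
      }
    }
    ```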

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/S3LogParser.java
##########
@@ -0,0 +1,306 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Class to help parse AWS S3 Logs.
+ * see https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html
+ *
+ * Getting the regexp right is surprisingly hard; this class does it
+ * explicitly and names each group in the process.
+ * All group names are included in {@link #GROUPS} in the order
+ * within the log entries.
+ *
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class S3LogParser {
+
+  /**
+   * Simple entry: anything up to a space.
+   * {@value}.
+   */
+  private static final String SIMPLE = "[^ ]*";
+
+  /**
+   * Date/Time. Everything within square braces.
+   * {@value}.
+   */
+  private static final String DATETIME = "\\[(.*?)\\]";
+
+  /**
+   * A natural number or "-".
+   * {@value}.
+   */
+  private static final String NUMBER = "(-|[0-9]*)";
+
+  /**
+   * A Quoted field or "-".
+   * {@value}.
+   */
+  private static final String QUOTED = "(-|\"[^\"]*\")";
+
+
+  /**
+   * An entry in the regexp.
+   * @param name name of the group
+   * @param pattern pattern to use in the regexp
+   * @return the pattern for the regexp
+   */
+  private static String e(String name, String pattern) {
+    return String.format("(?<%s>%s) ", name, pattern);
+  }
+
+  /**
+   * An entry in the regexp, without a trailing space.
+   * @param name name of the group
+   * @param pattern pattern to use in the regexp
+   * @return the pattern for the regexp
+   */
+  private static String eNoTrailing(String name, String pattern) {
+    return String.format("(?<%s>%s)", name, pattern);
+  }
+
+
+  // simple entry
+
+  /**
+   * Simple entry using the {@link #SIMPLE} pattern.
+   * @param name name of the element (for code clarity only)
+   * @return the pattern for the regexp
+   */
+  private static String e(String name) {
+    return e(name, SIMPLE);
+  }
+
+  /**
+   * Quoted entry using the {@link #QUOTED} pattern.
+   * @param name name of the element (for code clarity only)
+   * @return the pattern for the regexp
+   */
+  private static String Q(String name) {

Review comment:
       nit: Why is this capital `Q` when the other is lowercase `e`?
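
For readers following the helper style above: each helper wraps a field pattern in a named capturing group, the composed regexp is matched against a whole log line, and the fields are then read back by name rather than by position. A minimal two-field sketch in the same style (the field names and sample line are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedGroupSketch {
  public static void main(String[] args) {
    // SIMPLE ("anything up to a space") wrapped in a named group
    // with a trailing space, as e(name) does in the patch.
    Pattern prefix = Pattern.compile(
        "(?<owner>[^ ]*) (?<bucket>[^ ]*) ");
    Matcher m = prefix.matcher(
        "79a59df900b949e55d96a1e698fbaced mybucket [06/Feb/2019:00:00:38 +0000] ...");
    // lookingAt() anchors at the start of the line but tolerates
    // the trailing, unparsed remainder of the entry.
    if (m.lookingAt()) {
      System.out.println("owner  = " + m.group("owner"));
      System.out.println("bucket = " + m.group("bucket"));
    }
  }
}
```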




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-825791293


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 53s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 56s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 56s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1936 unchanged - 1 fixed = 1937 total (was 1937)  |
   | +1 :green_heart: |  compile  |  17m 57s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 57s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1831 unchanged - 1 fixed = 1832 total (was 1832)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 33s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 23s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 22s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux c48af188307e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1c7c9abed2efc53b641ee2feae96b455904660c9 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/testReport/ |
   | Max. process+thread count | 1258 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/13/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] steveloughran edited a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
steveloughran edited a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847009025


   Tests against AWS London with `-Dparallel-tests -DtestsThreadCount=5 -Dmarkers=delete -Dscale`: all good, though the run is getting a bit slow (tombstones?)
   
   ```
   [INFO]
   [WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 17
   [INFO]
   [INFO] ------------------------------------------------------------------------
   [INFO] BUILD SUCCESS
   [INFO] ------------------------------------------------------------------------
   [INFO] Total time:  37:52 min (Wall Clock)
   [INFO] Finished at: 2021-05-24T12:48:00+01:00
   [INFO] ------------------------------------------------------------------------
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818255439


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 43 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 51s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 30s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 49s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1937 unchanged - 1 fixed = 1938 total (was 1938)  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 55s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 unchanged - 1 fixed = 1833 total (was 1833)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189)  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 44s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 25s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 13s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.checkAccess(Path, S3AFileStatus, FsAction)  At NoopAuditManager.java:[line 158] |
   |  |  Read of unwritten field auditor in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:in org.apache.hadoop.fs.s3a.audit.impl.NoopAuditManager.getUnbondedSpan()  At NoopAuditManager.java:[line 117] |
   |  |  Unwritten field:NoopAuditManager.java:[line 110] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux cd8a0414976c 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 77bd725f69ce774b2c3c9618413d952d5a0cb156 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/testReport/ |
   | Max. process+thread count | 1738 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-810544059


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 20s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/8/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-827887228


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 29s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/15/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-829365518


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/18/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus removed a comment on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818255439




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-847305964


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 44 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  29m 16s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 24s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 49s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 37s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  22m 37s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 1995 unchanged - 3 fixed = 1998 total (was 1998)  |
   | +1 :green_heart: |  compile  |  19m  5s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m  5s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 1871 unchanged - 3 fixed = 1874 total (was 1874)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 5 new + 188 unchanged - 5 fixed = 193 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  hadoop-common in the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 0 new + 63 unchanged - 25 fixed = 63 total (was 88)  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 34s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 17s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 48s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 232m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux d5165efc37e3 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / beba5f55b72911f2bcb5b8f50daacec77cc2cd80 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/testReport/ |
   | Max. process+thread count | 3135 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/28/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841370155


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 44 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  2s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 20s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 18s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  22m 18s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 1988 unchanged - 2 fixed = 1990 total (was 1990)  |
   | +1 :green_heart: |  compile  |  19m 21s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  19m 21s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 2 new + 1887 unchanged - 2 fixed = 1889 total (was 1889)  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/blanks-eol.txt) |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 36s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 21 new + 188 unchanged - 5 fixed = 209 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 42s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 63 unchanged - 25 fixed = 64 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m 29s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 59s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 205m  8s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.audit.S3LogParser.GROUPS should be package protected  At S3LogParser.java: At S3LogParser.java:[line 268] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 5ba84140b6b6 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8f9cf0292a685eb8a3d670e1d2b761295866a914 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/testReport/ |
   | Max. process+thread count | 1312 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/23/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org