Posted to common-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/10/06 11:41:00 UTC
[jira] [Commented] (HADOOP-18465) S3A server-side encryption tests fail before checking encryption tests should skip
[ https://issues.apache.org/jira/browse/HADOOP-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17613457#comment-17613457 ]
ASF GitHub Bot commented on HADOOP-18465:
-----------------------------------------
steveloughran commented on code in PR #4925:
URL: https://github.com/apache/hadoop/pull/4925#discussion_r988925268
##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java:
##########
@@ -78,6 +78,14 @@ protected void patchConfigurationEncryptionSettings(
0, 1, 2, 3, 4, 5, 254, 255, 256, 257, 2 ^ 12 - 1
};
+ /**
+ * Skips the tests if encryption is not enabled in configuration.
+ *
+ * @implNote We can use {@link #createConfiguration()} here since
Review Comment:
not sure if maven is set up to use those tags; no reason why we shouldn't start though
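For context on the review comment: custom Javadoc tags such as {@code @implNote} are not recognized by the javadoc tool by default and are usually declared in the maven-javadoc-plugin configuration. A sketch of what that declaration could look like (illustrative only, not taken from Hadoop's actual pom.xml):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <tags>
      <!-- "a" = allow the tag in all placements -->
      <tag>
        <name>implNote</name>
        <placement>a</placement>
        <head>Implementation Note:</head>
      </tag>
    </tags>
  </configuration>
</plugin>
```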
> S3A server-side encryption tests fail before checking encryption tests should skip
> ----------------------------------------------------------------------------------
>
> Key: HADOOP-18465
> URL: https://issues.apache.org/jira/browse/HADOOP-18465
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Daniel Carl Jones
> Assignee: Daniel Carl Jones
> Priority: Minor
> Labels: pull-request-available
>
> When setting {{test.fs.s3a.encryption.enabled}} to {{false}}, this is not respected by ITestS3AEncryptionSSEKMSDefaultKey. See failure below.
>
> {code:java}
> ------------------------------------------------------------------------------
> Test set: org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey
> -------------------------------------------------------------------------------
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 6.053 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey
> testEncryptionOverRename(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey) Time elapsed: 3.063 s <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSBadRequestException: PUT 0-byte object on fork-0002/test: com.amazonaws.services.s3.model.AmazonS3Exception: SSE unavailable (Service: Amazon S3; Status Code: 400; Proxy: null)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:242)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:4394)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:4379)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.access$1800(S3AFileSystem.java:268)
> at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.createFakeDirectory(S3AFileSystem.java:3469)
> at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:159)
> at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
> at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2441)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2460)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3435)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2456)
> at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:363)
> at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:205)
> at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:111)
> at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.setup(AbstractTestS3AEncryption.java:94)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5456)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5403)
> at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:421)
> at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:6531)
> at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1861)
> at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1821)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$18(S3AFileSystem.java:2937)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:651)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:2934)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$31(S3AFileSystem.java:4396)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> ... 37 more
> {code}
> What I believe is happening: the superclass setup method runs first and asserts that it can create a directory. If the S3-compatible endpoint does not support encryption, that assertion fails, so the test errors out before the encryption skip check ever runs.
>
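The ordering problem described above can be sketched in plain Java (a minimal illustration of "check the skip condition before running the expensive base setup", not the actual Hadoop test code; names like {@code SkipException} and {@code encryptionEnabled} are hypothetical stand-ins for JUnit's assumption mechanism and the configuration flag):

```java
// Sketch: skip-before-setup ordering. baseSetup() stands in for the
// superclass setup that talks to S3 and would fail against an endpoint
// with encryption disabled; setup() checks the flag first.
public class SkipBeforeSetupSketch {

    // Stand-in for JUnit's AssumptionViolatedException / test skip.
    static class SkipException extends RuntimeException {
        SkipException(String msg) { super(msg); }
    }

    static boolean baseSetupRan = false;

    // Stand-in for AbstractFSContractTestBase.setup(), which creates a
    // directory on the store and is what actually failed in the report above.
    static void baseSetup() {
        baseSetupRan = true;
    }

    // Fixed ordering: evaluate the skip condition before touching the store.
    static void setup(boolean encryptionEnabled) {
        if (!encryptionEnabled) {
            throw new SkipException("encryption tests disabled in configuration");
        }
        baseSetup();
    }

    public static void main(String[] args) {
        try {
            setup(false);
        } catch (SkipException e) {
            System.out.println("skipped before base setup ran: " + !baseSetupRan);
        }
        setup(true);
        System.out.println("base setup ran when enabled: " + baseSetupRan);
    }
}
```

With the original ordering, {@code baseSetup()} would run (and fail) before the flag was ever consulted; moving the check ahead of it turns the failure into a clean skip.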
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org