Posted to common-commits@hadoop.apache.org by tm...@apache.org on 2018/08/11 05:37:16 UTC
[12/50] [abbrv] hadoop git commit: HADOOP-15583. Stabilize S3A
Assumed Role support. Contributed by Steve Loughran.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
index 3afd63f..8af0457 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
@@ -29,7 +29,7 @@ assumed roles for different buckets.
*IAM Assumed Roles are unlikely to be supported by third-party systems
supporting the S3 APIs.*
-## Using IAM Assumed Roles
+## <a name="using_assumed_roles"></a> Using IAM Assumed Roles
### Before You Begin
@@ -40,6 +40,8 @@ are, how to configure their policies, etc.
* You need a pair of long-lived IAM User credentials, not the root account set.
* Have the AWS CLI installed, and test that it works there.
* Give the role access to S3, and, if using S3Guard, to DynamoDB.
+* For working with data encrypted with SSE-KMS, the role must
+have access to the appropriate KMS keys.
Trying to learn how IAM Assumed Roles work by debugging stack traces from
the S3A client is "suboptimal".
@@ -51,7 +53,7 @@ To use assumed roles, the client must be configured to use the
in the configuration option `fs.s3a.aws.credentials.provider`.
This AWS Credential provider will read in the `fs.s3a.assumed.role` options needed to connect to the
-Session Token Service [Assumed Role API](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html),
+Security Token Service [Assumed Role API](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html),
first authenticating with the full credentials, then assuming the specific role
specified. It will then refresh this login at the configured rate of
`fs.s3a.assumed.role.session.duration`
@@ -69,7 +71,7 @@ which uses `fs.s3a.access.key` and `fs.s3a.secret.key`.
Note: although you can list other AWS credential providers in the
Assumed Role Credential Provider, doing so can only cause confusion.
-### <a name="using"></a> Using Assumed Roles
+### <a name="using"></a> Configuring Assumed Roles
To use assumed roles, the S3A client credentials provider must be set to
the `AssumedRoleCredentialProvider`, and `fs.s3a.assumed.role.arn` to
@@ -78,7 +80,6 @@ the previously created ARN.
```xml
<property>
<name>fs.s3a.aws.credentials.provider</name>
- <value>org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider</value>
<value>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
</property>
@@ -159,7 +160,18 @@ Here are the full set of configuration options.
<name>fs.s3a.assumed.role.sts.endpoint</name>
<value/>
<description>
- AWS Simple Token Service Endpoint. If unset, uses the default endpoint.
+ AWS Security Token Service Endpoint. If unset, uses the default endpoint.
+ Only used if AssumedRoleCredentialProvider is the AWS credential provider.
+ </description>
+</property>
+
+<property>
+ <name>fs.s3a.assumed.role.sts.endpoint.region</name>
+ <value>us-west-1</value>
+ <description>
+ AWS Security Token Service Endpoint's region;
+ Needed if fs.s3a.assumed.role.sts.endpoint points to an endpoint
+ other than the default one and the v4 signature is used.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
</description>
</property>
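
As a sketch (the endpoint and region values below are illustrative, not
defaults), a client pinning its STS calls to a regional endpoint would set
both options together:

```xml
<!-- Illustrative values: route AssumeRole calls to a regional endpoint.
     The region must be declared alongside a non-default endpoint so that
     the v4 signature can be calculated. -->
<property>
  <name>fs.s3a.assumed.role.sts.endpoint</name>
  <value>sts.eu-central-1.amazonaws.com</value>
</property>

<property>
  <name>fs.s3a.assumed.role.sts.endpoint.region</name>
  <value>eu-central-1</value>
</property>
```
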
@@ -194,39 +206,101 @@ These lists represent the minimum set of actions which the client's principal
must be allowed to perform in order to work with a bucket.
-### Read Access Permissions
+### <a name="read-permissions"></a> Read Access Permissions
Permissions which must be granted when reading from a bucket:
-| Action | S3A operations |
-|--------|----------|
-| `s3:ListBucket` | `listStatus()`, `getFileStatus()` and elsewhere |
-| `s3:GetObject` | `getFileStatus()`, `open()` and elsewhere |
-| `s3:ListBucketMultipartUploads` | Aborting/cleaning up S3A commit operations|
+```
+s3:Get*
+s3:ListBucket
+```
+
+When using S3Guard, the client needs the appropriate
+<a href="#s3guard-permissions">DynamoDB access permissions</a>.
+
+To use SSE-KMS encryption, the client needs the
+<a href="#sse-kms-permissions">SSE-KMS Permissions</a> to access the
+KMS key(s).
+
+### <a name="write-permissions"></a> Write Access Permissions
+
+These permissions must all be granted for write access:
+
+```
+s3:Get*
+s3:Delete*
+s3:Put*
+s3:ListBucket
+s3:ListBucketMultipartUploads
+s3:AbortMultipartUpload
+```
+
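As a sketch, these action lists could surface in a session policy set in
`fs.s3a.assumed.role.policy`; the bucket name below is illustrative, and the
read-only subset is just `s3:Get*` and `s3:ListBucket`:

```xml
<!-- Sketch only: a session policy granting the write (and read) actions
     listed above. "example-bucket" is an illustrative name. -->
<property>
  <name>fs.s3a.assumed.role.policy</name>
  <value>{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:Delete*",
        "s3:Put*",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }]
  }</value>
</property>
```
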
+### <a name="sse-kms-permissions"></a> SSE-KMS Permissions
+
+To read data encrypted using SSE-KMS, the client must have the
+`kms:Decrypt` permission for the specific key a file was encrypted with.
+
+```
+kms:Decrypt
+```
+
+To write data using SSE-KMS, the client must have all the following permissions.
+
+```
+kms:Decrypt
+kms:GenerateDataKey
+```
+This includes renaming: renamed files are encrypted with the encryption key
+of the current S3A client; it must decrypt the source file first.
-The `s3:ListBucketMultipartUploads` is only needed when committing work
-via the [S3A committers](committers.html).
-However, it must be granted to the root path in order to safely clean up jobs.
-It is simplest to permit this in all buckets, even if it is only actually
-needed when writing data.
+If the caller doesn't have these permissions, the operation will fail with an
+`AccessDeniedException`: the S3 Store does not provide the specifics of
+the cause of the failure.
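
For reference, a sketch of the client-side encryption settings under which
these KMS permissions come into play; the key ARN is illustrative, and the
role needs the permissions above on whichever key is actually used:

```xml
<!-- Sketch: when SSE-KMS is enabled like this, the role needs
     kms:Decrypt (and kms:GenerateDataKey for writes) on the key. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>

<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:123456789012:key/example-key-id</value>
</property>
```
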
+### <a name="s3guard-permissions"></a> S3Guard Permissions
-### Write Access Permissions
+To use S3Guard, all clients must have a subset of the
+[AWS DynamoDB Permissions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/api-permissions-reference.html).
-These permissions must *also* be granted for write access:
+To work with buckets protected with S3Guard, the client must have
+all the following rights on the DynamoDB Table used to protect that bucket.
+```
+dynamodb:BatchGetItem
+dynamodb:BatchWriteItem
+dynamodb:DeleteItem
+dynamodb:DescribeTable
+dynamodb:GetItem
+dynamodb:PutItem
+dynamodb:Query
+dynamodb:UpdateItem
+```
-| Action | S3A operations |
-|--------|----------|
-| `s3:PutObject` | `mkdir()`, `create()`, `rename()`, `delete()` |
-| `s3:DeleteObject` | `mkdir()`, `create()`, `rename()`, `delete()` |
-| `s3:AbortMultipartUpload` | S3A committer `abortJob()` and `cleanup()` operations |
-| `s3:ListMultipartUploadParts` | S3A committer `abortJob()` and `cleanup()` operations |
+This is true, *even if the client only has read access to the data*.
+For the `hadoop s3guard` table management commands, _extra_ permissions are required:
-### Mixed Permissions in a single S3 Bucket
+```
+dynamodb:CreateTable
+dynamodb:DescribeLimits
+dynamodb:DeleteTable
+dynamodb:Scan
+dynamodb:TagResource
+dynamodb:UntagResource
+dynamodb:UpdateTable
+```
+
+Without these permissions, tables cannot be created or destroyed, nor can
+their IO capacity be changed through the `s3guard set-capacity` call.
+The `dynamodb:Scan` permission is needed for `s3guard prune`.
+
+The `dynamodb:CreateTable` permission is needed by a client when it tries to
+create the DynamoDB table on startup: that is, when
+`fs.s3a.s3guard.ddb.table.create` is `true` and the table does not already exist.
+
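As a sketch, this is the kind of client configuration under which
`dynamodb:CreateTable` is exercised; the table name is illustrative:

```xml
<!-- Sketch: a client set to create the S3Guard table on demand,
     which is when dynamodb:CreateTable is needed. -->
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore</value>
</property>

<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>example-s3guard-table</value>
</property>

<property>
  <name>fs.s3a.s3guard.ddb.table.create</name>
  <value>true</value>
</property>
```
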
+### <a name="mixed-permissions"></a> Mixed Permissions in a single S3 Bucket
Mixing permissions down the "directory tree" is limited
only to the extent of supporting writeable directories under
@@ -274,7 +348,7 @@ This example has the base bucket read only, and a directory underneath,
"Action" : [
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
- "s3:GetObject"
+ "s3:Get*"
],
"Resource" : "arn:aws:s3:::example-bucket/*"
}, {
@@ -320,7 +394,7 @@ the command line before trying to use the S3A client.
`hadoop fs -mkdir -p s3a://bucket/path/p1/`
-### <a name="no_role"></a>IOException: "Unset property fs.s3a.assumed.role.arn"
+### <a name="no_role"></a> IOException: "Unset property fs.s3a.assumed.role.arn"
The Assumed Role Credential Provider is enabled, but `fs.s3a.assumed.role.arn` is unset.
@@ -339,7 +413,7 @@ java.io.IOException: Unset property fs.s3a.assumed.role.arn
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
```
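
The fix is to declare the ARN of the role to be assumed; the ARN below is
illustrative:

```xml
<!-- Sketch: set the ARN of the role to assume. -->
<property>
  <name>fs.s3a.assumed.role.arn</name>
  <value>arn:aws:iam::123456789012:role/example-s3-role</value>
</property>
```
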
-### <a name="not_authorized_for_assumed_role"></a>"Not authorized to perform sts:AssumeRole"
+### <a name="not_authorized_for_assumed_role"></a> "Not authorized to perform sts:AssumeRole"
This can arise if the role ARN set in `fs.s3a.assumed.role.arn` is invalid
or one to which the caller has no access.
@@ -399,7 +473,8 @@ Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceExc
The value of `fs.s3a.assumed.role.session.duration` is out of range.
```
-java.lang.IllegalArgumentException: Assume Role session duration should be in the range of 15min - 1Hr
+java.lang.IllegalArgumentException: Assume Role session duration should be in the range of 15min
+- 1Hr
at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider$Builder.withRoleSessionDurationSeconds(STSAssumeRoleSessionCredentialsProvider.java:437)
at org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider.<init>(AssumedRoleCredentialProvider.java:86)
```
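
The fix is to bring the duration back into the accepted range, for example:

```xml
<!-- Sketch: any duration within the 15min-1h range accepted by
     this SDK version will do. -->
<property>
  <name>fs.s3a.assumed.role.session.duration</name>
  <value>30m</value>
</property>
```
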
@@ -603,7 +678,7 @@ Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceExc
### <a name="invalid_token"></a> `AccessDeniedException/InvalidClientTokenId`: "The security token included in the request is invalid"
-The credentials used to authenticate with the AWS Simple Token Service are invalid.
+The credentials used to authenticate with the AWS Security Token Service are invalid.
```
[ERROR] Failures:
@@ -682,26 +757,7 @@ org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate org.apache.hadoop.f
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
- at org.apache.hadoop.fs.s3a.ITestAssumeRole.lambda$expectFileSystemFailure$0(ITestAssumeRole.java:70)
- at org.apache.hadoop.fs.s3a.ITestAssumeRole.lambda$interceptC$1(ITestAssumeRole.java:84)
- at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
- at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377)
- at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446)
- at org.apache.hadoop.fs.s3a.ITestAssumeRole.interceptC(ITestAssumeRole.java:82)
- at org.apache.hadoop.fs.s3a.ITestAssumeRole.expectFileSystemFailure(ITestAssumeRole.java:68)
- at org.apache.hadoop.fs.s3a.ITestAssumeRole.testAssumeRoleBadSession(ITestAssumeRole.java:216)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
- at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
- at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
- at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
- at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
- at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
- at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
- at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
+
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
failed to satisfy constraint:
@@ -742,10 +798,11 @@ Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceExc
### <a name="access_denied"></a> `java.nio.file.AccessDeniedException` within a FileSystem API call
If an operation fails with an `AccessDeniedException`, then the role does not have
-the permission for the S3 Operation invoked during the call
+the permission for the S3 Operation invoked during the call.
```
-java.nio.file.AccessDeniedException: s3a://bucket/readonlyDir: rename(s3a://bucket/readonlyDir, s3a://bucket/renameDest)
+java.nio.file.AccessDeniedException: s3a://bucket/readonlyDir:
+ rename(s3a://bucket/readonlyDir, s3a://bucket/renameDest)
on s3a://bucket/readonlyDir:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2805F2ABF5246BB1;
@@ -795,3 +852,33 @@ check the path for the operation.
Make sure that all the read and write permissions are allowed for any bucket/path
to which data is being written, and that read permissions are allowed for all
buckets read from.
+
+If the bucket is using SSE-KMS to encrypt data:
+
+1. To read data, the caller must have the `kms:Decrypt` permission.
+1. To write data, the caller needs both `kms:Decrypt` and `kms:GenerateDataKey`.
+
+Without permissions, the request fails *and there is no explicit message indicating
+that this is an encryption-key issue*.
+
+### <a name="dynamodb_exception"></a> `AccessDeniedException` + `AmazonDynamoDBException`
+
+```
+java.nio.file.AccessDeniedException: bucket1:
+ com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException:
+ User: arn:aws:sts::980678866538:assumed-role/s3guard-test-role/test is not authorized to perform:
+ dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-west-1:980678866538:table/bucket1
+ (Service: AmazonDynamoDBv2; Status Code: 400;
+```
+
+The caller is trying to access an S3 bucket which uses S3Guard, but the caller
+lacks the relevant DynamoDB access permissions.
+
+The `dynamodb:DescribeTable` operation is the first one used in S3Guard to access
+the DynamoDB table, so it is often the first to fail. It can be a sign
+that the role has no permissions at all to access the table named in the exception,
+or just that this specific permission has been omitted.
+
+If the role policy requested for the assumed role didn't ask for any DynamoDB
+permissions, this is where all attempts to work with an S3Guard-protected bucket
+will fail. Check the value of `fs.s3a.assumed.role.policy`.
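
As a sketch, a policy set in `fs.s3a.assumed.role.policy` would need a
statement like the following alongside its S3 permissions; the resource ARN
below matches the table named in the exception above:

```xml
<!-- Sketch only: grant the S3Guard client permissions on the table. -->
<property>
  <name>fs.s3a.assumed.role.policy</name>
  <value>{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem",
        "dynamodb:DeleteItem",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-1:980678866538:table/bucket1"
    }]
  }</value>
</property>
```
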
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index 7d0f67b..2dee10a 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -33,7 +33,7 @@ See also:
* [Working with IAM Assumed Roles](./assumed_roles.html)
* [Testing](./testing.html)
-##<a name="overview"></a> Overview
+## <a name="overview"></a> Overview
Apache Hadoop's `hadoop-aws` module provides support for AWS integration,
enabling applications to easily use this support.
@@ -88,7 +88,7 @@ maintain it.
This connector is no longer available: users must migrate to the newer `s3a:` client.
-##<a name="getting_started"></a> Getting Started
+## <a name="getting_started"></a> Getting Started
S3A depends upon two JARs, alongside `hadoop-common` and its dependencies.
@@ -1698,6 +1698,6 @@ as configured by the value `fs.s3a.multipart.size`.
To disable checksum verification in `distcp`, use the `-skipcrccheck` option:
```bash
-hadoop distcp -update -skipcrccheck /user/alice/datasets s3a://alice-backup/datasets
+hadoop distcp -update -skipcrccheck -numListstatusThreads 40 /user/alice/datasets s3a://alice-backup/datasets
```
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
index aa6b5d8..3214c76 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
@@ -36,14 +36,6 @@ import org.junit.rules.Timeout;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import static org.apache.hadoop.fs.s3a.S3ATestConstants.TEST_FS_S3A_NAME;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNotEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.fail;
-
import java.io.File;
import java.net.URI;
import java.security.PrivilegedExceptionAction;
@@ -60,6 +52,9 @@ import org.junit.rules.TemporaryFolder;
import static org.apache.hadoop.fs.s3a.Constants.*;
import static org.apache.hadoop.fs.s3a.S3AUtils.*;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.*;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.apache.hadoop.fs.s3a.S3ATestConstants.TEST_FS_S3A_NAME;
+import static org.junit.Assert.*;
/**
* S3A tests for configuration.
@@ -134,12 +129,26 @@ public class ITestS3AConfiguration {
conf.setInt(Constants.PROXY_PORT, 1);
String proxy =
conf.get(Constants.PROXY_HOST) + ":" + conf.get(Constants.PROXY_PORT);
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a connection error for proxy server at " + proxy);
- } catch (AWSClientIOException e) {
- // expected
- }
+ expectFSCreateFailure(AWSClientIOException.class,
+ conf, "when using proxy " + proxy);
+ }
+
+ /**
+ * Expect a filesystem to not be created from a configuration.
+ * @param clazz expected exception class
+ * @param conf configuration to use when creating the filesystem
+ * @param text text to include in assertion messages
+ * @return the exception intercepted
+ * @throws Exception any other exception
+ */
+ private <E extends Throwable> E expectFSCreateFailure(
+ Class<E> clazz,
+ Configuration conf,
+ String text)
+ throws Exception {
+
+ return intercept(clazz,
+ () -> {
+ fs = S3ATestUtils.createTestFileSystem(conf);
+ return "expected failure creating FS " + text + " got " + fs;
+ });
}
@Test
@@ -148,15 +157,13 @@ public class ITestS3AConfiguration {
conf.unset(Constants.PROXY_HOST);
conf.setInt(Constants.MAX_ERROR_RETRIES, 2);
conf.setInt(Constants.PROXY_PORT, 1);
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a proxy configuration error");
- } catch (IllegalArgumentException e) {
- String msg = e.toString();
- if (!msg.contains(Constants.PROXY_HOST) &&
- !msg.contains(Constants.PROXY_PORT)) {
- throw e;
- }
+ IllegalArgumentException e = expectFSCreateFailure(
+ IllegalArgumentException.class,
+ conf, "Expected a connection error for proxy server");
+ String msg = e.toString();
+ if (!msg.contains(Constants.PROXY_HOST) &&
+ !msg.contains(Constants.PROXY_PORT)) {
+ throw e;
}
}
@@ -167,19 +174,11 @@ public class ITestS3AConfiguration {
conf.setInt(Constants.MAX_ERROR_RETRIES, 2);
conf.set(Constants.PROXY_HOST, "127.0.0.1");
conf.set(Constants.SECURE_CONNECTIONS, "true");
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a connection error for proxy server");
- } catch (AWSClientIOException e) {
- // expected
- }
+ expectFSCreateFailure(AWSClientIOException.class,
+ conf, "Expected a connection error for proxy server");
conf.set(Constants.SECURE_CONNECTIONS, "false");
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a connection error for proxy server");
- } catch (AWSClientIOException e) {
- // expected
- }
+ expectFSCreateFailure(AWSClientIOException.class,
+ conf, "Expected a connection error for proxy server");
}
@Test
@@ -189,31 +188,31 @@ public class ITestS3AConfiguration {
conf.set(Constants.PROXY_HOST, "127.0.0.1");
conf.setInt(Constants.PROXY_PORT, 1);
conf.set(Constants.PROXY_USERNAME, "user");
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a connection error for proxy server");
- } catch (IllegalArgumentException e) {
- String msg = e.toString();
- if (!msg.contains(Constants.PROXY_USERNAME) &&
- !msg.contains(Constants.PROXY_PASSWORD)) {
- throw e;
- }
+ IllegalArgumentException e = expectFSCreateFailure(
+ IllegalArgumentException.class,
+ conf, "Expected a connection error for proxy server");
+ assertIsProxyUsernameError(e);
+ }
+
+ private void assertIsProxyUsernameError(final IllegalArgumentException e) {
+ String msg = e.toString();
+ if (!msg.contains(Constants.PROXY_USERNAME) &&
+ !msg.contains(Constants.PROXY_PASSWORD)) {
+ throw e;
}
+ }
+
+ @Test
+ public void testUsernameInconsistentWithPassword2() throws Exception {
conf = new Configuration();
conf.setInt(Constants.MAX_ERROR_RETRIES, 2);
conf.set(Constants.PROXY_HOST, "127.0.0.1");
conf.setInt(Constants.PROXY_PORT, 1);
conf.set(Constants.PROXY_PASSWORD, "password");
- try {
- fs = S3ATestUtils.createTestFileSystem(conf);
- fail("Expected a connection error for proxy server");
- } catch (IllegalArgumentException e) {
- String msg = e.toString();
- if (!msg.contains(Constants.PROXY_USERNAME) &&
- !msg.contains(Constants.PROXY_PASSWORD)) {
- throw e;
- }
- }
+ IllegalArgumentException e = expectFSCreateFailure(
+ IllegalArgumentException.class,
+ conf, "Expected a connection error for proxy server");
+ assertIsProxyUsernameError(e);
}
@Test
@@ -393,7 +392,7 @@ public class ITestS3AConfiguration {
// Catch/pass standard path style access behaviour when live bucket
// isn't in the same region as the s3 client default. See
// http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
- assertEquals(e.getStatusCode(), HttpStatus.SC_MOVED_PERMANENTLY);
+ assertEquals(HttpStatus.SC_MOVED_PERMANENTLY, e.getStatusCode());
}
}
@@ -428,8 +427,16 @@ public class ITestS3AConfiguration {
public void testCloseIdempotent() throws Throwable {
conf = new Configuration();
fs = S3ATestUtils.createTestFileSystem(conf);
+ AWSCredentialProviderList credentials =
+ fs.shareCredentials("testCloseIdempotent");
+ credentials.close();
fs.close();
+ assertTrue("Closing FS didn't close credentials " + credentials,
+ credentials.isClosed());
+ assertEquals("refcount not zero in " + credentials, 0, credentials.getRefCount());
fs.close();
+ // and the numbers should not change
+ assertEquals("refcount not zero in " + credentials, 0, credentials.getRefCount());
}
@Test
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
index 44a2beb..afc4086 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
@@ -19,15 +19,14 @@
package org.apache.hadoop.fs.s3a;
import java.io.IOException;
-import java.net.URI;
-import com.amazonaws.auth.AWSCredentialsProvider;
-import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
+import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
+import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;
import com.amazonaws.services.securitytoken.model.GetSessionTokenResult;
import com.amazonaws.services.securitytoken.model.Credentials;
-import org.apache.hadoop.fs.s3native.S3xLoginHelper;
+import org.apache.hadoop.fs.s3a.auth.STSClientFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.LambdaTestUtils;
@@ -55,6 +54,14 @@ public class ITestS3ATemporaryCredentials extends AbstractS3ATestBase {
private static final long TEST_FILE_SIZE = 1024;
+ private AWSCredentialProviderList credentials;
+
+ @Override
+ public void teardown() throws Exception {
+ S3AUtils.closeAutocloseables(LOG, credentials);
+ super.teardown();
+ }
+
/**
* Test use of STS for requesting temporary credentials.
*
@@ -63,7 +70,7 @@ public class ITestS3ATemporaryCredentials extends AbstractS3ATestBase {
* S3A tests to request temporary credentials, then attempt to use those
* credentials instead.
*
- * @throws IOException
+ * @throws IOException failure
*/
@Test
public void testSTS() throws IOException {
@@ -71,21 +78,20 @@ public class ITestS3ATemporaryCredentials extends AbstractS3ATestBase {
if (!conf.getBoolean(TEST_STS_ENABLED, true)) {
skip("STS functional tests disabled");
}
-
- S3xLoginHelper.Login login = S3AUtils.getAWSAccessKeys(
- URI.create("s3a://foobar"), conf);
- if (!login.hasLogin()) {
- skip("testSTS disabled because AWS credentials not configured");
- }
- AWSCredentialsProvider parentCredentials = new BasicAWSCredentialsProvider(
- login.getUser(), login.getPassword());
-
- String stsEndpoint = conf.getTrimmed(TEST_STS_ENDPOINT, "");
- AWSSecurityTokenServiceClient stsClient;
- stsClient = new AWSSecurityTokenServiceClient(parentCredentials);
- if (!stsEndpoint.isEmpty()) {
- LOG.debug("STS Endpoint ={}", stsEndpoint);
- stsClient.setEndpoint(stsEndpoint);
+ S3AFileSystem testFS = getFileSystem();
+ credentials = testFS.shareCredentials("testSTS");
+
+ String bucket = testFS.getBucket();
+ AWSSecurityTokenServiceClientBuilder builder = STSClientFactory.builder(
+ conf,
+ bucket,
+ credentials,
+ conf.getTrimmed(TEST_STS_ENDPOINT, ""), "");
+ AWSSecurityTokenService stsClient = builder.build();
+
+ if (!conf.getTrimmed(TEST_STS_ENDPOINT, "").isEmpty()) {
+ LOG.debug("STS Endpoint ={}", conf.getTrimmed(TEST_STS_ENDPOINT, ""));
+ stsClient.setEndpoint(conf.getTrimmed(TEST_STS_ENDPOINT, ""));
}
GetSessionTokenRequest sessionTokenRequest = new GetSessionTokenRequest();
sessionTokenRequest.setDurationSeconds(900);
@@ -93,23 +99,28 @@ public class ITestS3ATemporaryCredentials extends AbstractS3ATestBase {
sessionTokenResult = stsClient.getSessionToken(sessionTokenRequest);
Credentials sessionCreds = sessionTokenResult.getCredentials();
- String childAccessKey = sessionCreds.getAccessKeyId();
- conf.set(ACCESS_KEY, childAccessKey);
- String childSecretKey = sessionCreds.getSecretAccessKey();
- conf.set(SECRET_KEY, childSecretKey);
- String sessionToken = sessionCreds.getSessionToken();
- conf.set(SESSION_TOKEN, sessionToken);
+ // clone configuration so changes here do not affect the base FS.
+ Configuration conf2 = new Configuration(conf);
+ S3AUtils.clearBucketOption(conf2, bucket, AWS_CREDENTIALS_PROVIDER);
+ S3AUtils.clearBucketOption(conf2, bucket, ACCESS_KEY);
+ S3AUtils.clearBucketOption(conf2, bucket, SECRET_KEY);
+ S3AUtils.clearBucketOption(conf2, bucket, SESSION_TOKEN);
+
+ conf2.set(ACCESS_KEY, sessionCreds.getAccessKeyId());
+ conf2.set(SECRET_KEY, sessionCreds.getSecretAccessKey());
+ conf2.set(SESSION_TOKEN, sessionCreds.getSessionToken());
- conf.set(AWS_CREDENTIALS_PROVIDER, PROVIDER_CLASS);
+ conf2.set(AWS_CREDENTIALS_PROVIDER, PROVIDER_CLASS);
- try(S3AFileSystem fs = S3ATestUtils.createTestFileSystem(conf)) {
+ // with valid credentials, we can set properties.
+ try(S3AFileSystem fs = S3ATestUtils.createTestFileSystem(conf2)) {
createAndVerifyFile(fs, path("testSTS"), TEST_FILE_SIZE);
}
// now create an invalid set of credentials by changing the session
// token
- conf.set(SESSION_TOKEN, "invalid-" + sessionToken);
- try (S3AFileSystem fs = S3ATestUtils.createTestFileSystem(conf)) {
+ conf2.set(SESSION_TOKEN, "invalid-" + sessionCreds.getSessionToken());
+ try (S3AFileSystem fs = S3ATestUtils.createTestFileSystem(conf2)) {
createAndVerifyFile(fs, path("testSTSInvalidToken"), TEST_FILE_SIZE);
fail("Expected an access exception, but file access to "
+ fs.getUri() + " was allowed: " + fs);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
index 763819b..a1df1a5 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.s3a;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
+
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
@@ -28,6 +29,7 @@ import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.apache.hadoop.fs.contract.s3a.S3AContract;
+
import org.junit.Assume;
import org.junit.Test;
@@ -37,6 +39,7 @@ import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
+import java.util.stream.Collectors;
import static org.apache.hadoop.fs.contract.ContractTestUtils.touch;
import static org.apache.hadoop.fs.contract.ContractTestUtils.writeTextFile;
@@ -71,7 +74,9 @@ public class ITestS3GuardListConsistency extends AbstractS3ATestBase {
// Other configs would break test assumptions
conf.set(FAIL_INJECT_INCONSISTENCY_KEY, DEFAULT_DELAY_KEY_SUBSTRING);
conf.setFloat(FAIL_INJECT_INCONSISTENCY_PROBABILITY, 1.0f);
- conf.setLong(FAIL_INJECT_INCONSISTENCY_MSEC, DEFAULT_DELAY_KEY_MSEC);
+ // this is a long value to guarantee that the inconsistency holds
+ // even over long-haul connections, and in the debugger too.
+ conf.setLong(FAIL_INJECT_INCONSISTENCY_MSEC, (long) (60 * 1000));
return new S3AContract(conf);
}
@@ -524,37 +529,60 @@ public class ITestS3GuardListConsistency extends AbstractS3ATestBase {
ListObjectsV2Result postDeleteDelimited = listObjectsV2(fs, key, "/");
ListObjectsV2Result postDeleteUndelimited = listObjectsV2(fs, key, null);
-
- assertEquals("InconsistentAmazonS3Client added back objects incorrectly " +
+ assertListSizeEqual(
+ "InconsistentAmazonS3Client added back objects incorrectly " +
"in a non-recursive listing",
- preDeleteDelimited.getObjectSummaries().size(),
- postDeleteDelimited.getObjectSummaries().size()
- );
- assertEquals("InconsistentAmazonS3Client added back prefixes incorrectly " +
+ preDeleteDelimited.getObjectSummaries(),
+ postDeleteDelimited.getObjectSummaries());
+
+ assertListSizeEqual("InconsistentAmazonS3Client added back prefixes incorrectly " +
"in a non-recursive listing",
- preDeleteDelimited.getCommonPrefixes().size(),
- postDeleteDelimited.getCommonPrefixes().size()
+ preDeleteDelimited.getCommonPrefixes(),
+ postDeleteDelimited.getCommonPrefixes()
);
- assertEquals("InconsistentAmazonS3Client added back objects incorrectly " +
+ assertListSizeEqual("InconsistentAmazonS3Client added back objects incorrectly " +
"in a recursive listing",
- preDeleteUndelimited.getObjectSummaries().size(),
- postDeleteUndelimited.getObjectSummaries().size()
+ preDeleteUndelimited.getObjectSummaries(),
+ postDeleteUndelimited.getObjectSummaries()
);
- assertEquals("InconsistentAmazonS3Client added back prefixes incorrectly " +
+
+ assertListSizeEqual("InconsistentAmazonS3Client added back prefixes incorrectly " +
"in a recursive listing",
- preDeleteUndelimited.getCommonPrefixes().size(),
- postDeleteUndelimited.getCommonPrefixes().size()
+ preDeleteUndelimited.getCommonPrefixes(),
+ postDeleteUndelimited.getCommonPrefixes()
);
}
/**
- * retrying v2 list.
- * @param fs
- * @param key
- * @param delimiter
- * @return
+ * Assert that the two list sizes match; failure message includes the lists.
+ * @param message text for the assertion
+ * @param expected expected list
+ * @param actual actual list
+ * @param <T> type of list
+ */
+ private <T> void assertListSizeEqual(String message,
+ List<T> expected,
+ List<T> actual) {
+ String leftContents = expected.stream()
+ .map(n -> n.toString())
+ .collect(Collectors.joining("\n"));
+ String rightContents = actual.stream()
+ .map(n -> n.toString())
+ .collect(Collectors.joining("\n"));
+ String summary = "\nExpected:" + leftContents
+ + "\n-----------\nActual:" + rightContents;
+ assertEquals(message + summary, expected.size(), actual.size());
+ }
+
+ /**
+ * Retrying v2 list directly through the s3 client.
+ * @param fs filesystem
+ * @param key key to list under
+ * @param delimiter any delimiter
+ * @return the listing
* @throws IOException on error
*/
+ @Retries.RetryRaw
private ListObjectsV2Result listObjectsV2(S3AFileSystem fs,
String key, String delimiter) throws IOException {
ListObjectsV2Request k = fs.createListObjectsRequest(key, delimiter)
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
index c8a54b8..d5cd4d4 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
@@ -65,11 +65,12 @@ public class ITestS3GuardWriteBack extends AbstractS3ATestBase {
// delete the existing directory (in case of last test failure)
noS3Guard.delete(directory, true);
// Create a directory on S3 only
- noS3Guard.mkdirs(new Path(directory, "OnS3"));
+ Path onS3 = new Path(directory, "OnS3");
+ noS3Guard.mkdirs(onS3);
// Create a directory on both S3 and metadata store
- Path p = new Path(directory, "OnS3AndMS");
- ContractTestUtils.assertPathDoesNotExist(noWriteBack, "path", p);
- noWriteBack.mkdirs(p);
+ Path onS3AndMS = new Path(directory, "OnS3AndMS");
+ ContractTestUtils.assertPathDoesNotExist(noWriteBack, "path", onS3AndMS);
+ noWriteBack.mkdirs(onS3AndMS);
FileStatus[] fsResults;
DirListingMetadata mdResults;
@@ -83,6 +84,8 @@ public class ITestS3GuardWriteBack extends AbstractS3ATestBase {
// Metadata store without write-back should still only contain /OnS3AndMS,
// because newly discovered /OnS3 is not written back to metadata store
mdResults = noWriteBack.getMetadataStore().listChildren(directory);
+ assertNotNull("No results from noWriteBack listChildren " + directory,
+ mdResults);
assertEquals("Metadata store without write back should still only know "
+ "about /OnS3AndMS, but it has: " + mdResults,
1, mdResults.numEntries());
@@ -102,8 +105,7 @@ public class ITestS3GuardWriteBack extends AbstractS3ATestBase {
// If we don't clean this up, the next test run will fail because it will
// have recorded /OnS3 being deleted even after it's written to noS3Guard.
- getFileSystem().getMetadataStore().forgetMetadata(
- new Path(directory, "OnS3"));
+ getFileSystem().getMetadataStore().forgetMetadata(onS3);
}
/**
@@ -118,26 +120,33 @@ public class ITestS3GuardWriteBack extends AbstractS3ATestBase {
// Create a FileSystem that is S3-backed only
conf = createConfiguration();
- S3ATestUtils.disableFilesystemCaching(conf);
String host = fsURI.getHost();
- if (disableS3Guard) {
- conf.set(Constants.S3_METADATA_STORE_IMPL,
- Constants.S3GUARD_METASTORE_NULL);
- S3AUtils.setBucketOption(conf, host,
- S3_METADATA_STORE_IMPL,
- S3GUARD_METASTORE_NULL);
- } else {
- S3ATestUtils.maybeEnableS3Guard(conf);
- conf.setBoolean(METADATASTORE_AUTHORITATIVE, authoritativeMeta);
- S3AUtils.setBucketOption(conf, host,
- METADATASTORE_AUTHORITATIVE,
- Boolean.toString(authoritativeMeta));
- S3AUtils.setBucketOption(conf, host,
- S3_METADATA_STORE_IMPL,
- conf.get(S3_METADATA_STORE_IMPL));
+ String metastore;
+
+ metastore = S3GUARD_METASTORE_NULL;
+ if (!disableS3Guard) {
+ // pick up the metadata store used by the main test
+ metastore = getFileSystem().getConf().get(S3_METADATA_STORE_IMPL);
+ assertNotEquals(S3GUARD_METASTORE_NULL, metastore);
}
- FileSystem fs = FileSystem.get(fsURI, conf);
- return asS3AFS(fs);
+
+ conf.set(Constants.S3_METADATA_STORE_IMPL, metastore);
+ conf.setBoolean(METADATASTORE_AUTHORITATIVE, authoritativeMeta);
+ S3AUtils.setBucketOption(conf, host,
+ METADATASTORE_AUTHORITATIVE,
+ Boolean.toString(authoritativeMeta));
+ S3AUtils.setBucketOption(conf, host,
+ S3_METADATA_STORE_IMPL, metastore);
+
+ S3AFileSystem fs = asS3AFS(FileSystem.newInstance(fsURI, conf));
+ // do a check to verify that everything got through
+ assertEquals("Metadata store should have been disabled: " + fs,
+ disableS3Guard, !fs.hasMetadataStore());
+ assertEquals("metastore option did not propagate",
+ metastore, fs.getConf().get(S3_METADATA_STORE_IMPL));
+
+ return fs;
+
}
private static S3AFileSystem asS3AFS(FileSystem fs) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
index b746bfe5..dbf228d 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
@@ -23,6 +23,7 @@ import static org.mockito.Mockito.*;
import java.net.URI;
import java.util.ArrayList;
+import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.MultipartUploadListing;
import com.amazonaws.services.s3.model.Region;
@@ -34,8 +35,9 @@ import com.amazonaws.services.s3.model.Region;
public class MockS3ClientFactory implements S3ClientFactory {
@Override
- public AmazonS3 createS3Client(URI name) {
- String bucket = name.getHost();
+ public AmazonS3 createS3Client(URI name,
+ final String bucket,
+ final AWSCredentialsProvider credentialSet) {
AmazonS3 s3 = mock(AmazonS3.class);
when(s3.doesBucketExist(bucket)).thenReturn(true);
// this listing is used in startup if purging is enabled, so
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
index d731ae7..b28925c 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.s3a;
import java.io.IOException;
import java.net.URI;
+import java.nio.file.AccessDeniedException;
import java.util.Arrays;
import java.util.List;
@@ -34,11 +35,15 @@ import org.junit.rules.ExpectedException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider;
+import org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.test.GenericTestUtils;
import static org.apache.hadoop.fs.s3a.Constants.*;
import static org.apache.hadoop.fs.s3a.S3ATestConstants.*;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.*;
import static org.apache.hadoop.fs.s3a.S3AUtils.*;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
import static org.junit.Assert.*;
/**
@@ -221,14 +226,13 @@ public class TestS3AAWSCredentialsProvider {
}
private void expectProviderInstantiationFailure(String option,
- String expectedErrorText) throws IOException {
+ String expectedErrorText) throws Exception {
Configuration conf = new Configuration();
conf.set(AWS_CREDENTIALS_PROVIDER, option);
Path testFile = new Path(
conf.getTrimmed(KEY_CSVTEST_FILE, DEFAULT_CSVTEST_FILE));
- expectException(IOException.class, expectedErrorText);
- URI uri = testFile.toUri();
- S3AUtils.createAWSCredentialProviderSet(uri, conf);
+ intercept(IOException.class, expectedErrorText,
+ () -> S3AUtils.createAWSCredentialProviderSet(testFile.toUri(), conf));
}
/**
@@ -288,4 +292,68 @@ public class TestS3AAWSCredentialsProvider {
authenticationContains(conf, AssumedRoleCredentialProvider.NAME));
}
+ @Test
+ public void testExceptionLogic() throws Throwable {
+ AWSCredentialProviderList providers
+ = new AWSCredentialProviderList();
+ // verify you can't get credentials from it
+ NoAuthWithAWSException noAuth = intercept(NoAuthWithAWSException.class,
+ AWSCredentialProviderList.NO_AWS_CREDENTIAL_PROVIDERS,
+ () -> providers.getCredentials());
+ // but that it closes safely
+ providers.close();
+
+ S3ARetryPolicy retryPolicy = new S3ARetryPolicy(new Configuration());
+ assertEquals("Expected no retry on auth failure",
+ RetryPolicy.RetryAction.FAIL.action,
+ retryPolicy.shouldRetry(noAuth, 0, 0, true).action);
+
+ try {
+ throw S3AUtils.translateException("login", "", noAuth);
+ } catch (AccessDeniedException expected) {
+ // this is what we want; other exceptions will be passed up
+ assertEquals("Expected no retry on AccessDeniedException",
+ RetryPolicy.RetryAction.FAIL.action,
+ retryPolicy.shouldRetry(expected, 0, 0, true).action);
+ }
+
+ }
+
+ @Test
+ public void testRefCounting() throws Throwable {
+ AWSCredentialProviderList providers
+ = new AWSCredentialProviderList();
+ assertEquals("Ref count for " + providers,
+ 1, providers.getRefCount());
+ AWSCredentialProviderList replicate = providers.share();
+ assertEquals(providers, replicate);
+ assertEquals("Ref count after replication for " + providers,
+ 2, providers.getRefCount());
+ assertFalse("Was closed " + providers, providers.isClosed());
+ providers.close();
+ assertFalse("Was closed " + providers, providers.isClosed());
+ assertEquals("Ref count after close() for " + providers,
+ 1, providers.getRefCount());
+
+ // this should now close it
+ providers.close();
+ assertTrue("Was not closed " + providers, providers.isClosed());
+ assertEquals("Ref count after close() for " + providers,
+ 0, providers.getRefCount());
+ assertEquals("Ref count after second close() for " + providers,
+ 0, providers.getRefCount());
+ intercept(IllegalStateException.class, "closed",
+ () -> providers.share());
+ // final call harmless
+ providers.close();
+ assertEquals("Ref count after close() for " + providers,
+ 0, providers.getRefCount());
+ providers.refresh();
+
+ intercept(NoAuthWithAWSException.class,
+ AWSCredentialProviderList.CREDENTIALS_REQUESTED_WHEN_CLOSED,
+ () -> providers.getCredentials());
+ }
+
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
index c6985b0..7451ef1 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
@@ -61,6 +61,7 @@ import static org.apache.hadoop.fs.s3a.auth.RoleTestUtils.*;
import static org.apache.hadoop.fs.s3a.auth.RoleModel.*;
import static org.apache.hadoop.fs.s3a.auth.RolePolicies.*;
import static org.apache.hadoop.fs.s3a.auth.RoleTestUtils.forbidden;
+import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains;
import static org.apache.hadoop.test.LambdaTestUtils.*;
/**
@@ -85,6 +86,24 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
*/
private S3AFileSystem roleFS;
+ /**
+ * Duration range exception text on SDKs which check client-side.
+ */
+ protected static final String E_DURATION_RANGE_1
+ = "Assume Role session duration should be in the range of 15min - 1Hr";
+
+ /**
+ * Duration range too high text on SDKs which check on the server.
+ */
+ protected static final String E_DURATION_RANGE_2
+ = "Member must have value less than or equal to 43200";
+
+ /**
+ * Duration range too low text on SDKs which check on the server.
+ */
+ protected static final String E_DURATION_RANGE_3
+ = "Member must have value greater than or equal to 900";
+
@Override
public void setup() throws Exception {
super.setup();
@@ -112,13 +131,14 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
* @param clazz class of exception to expect
* @param text text in exception
* @param <E> type of exception as inferred from clazz
+ * @return the caught exception if it was of the expected type and contents
* @throws Exception if the exception was the wrong class
*/
- private <E extends Throwable> void expectFileSystemCreateFailure(
+ private <E extends Throwable> E expectFileSystemCreateFailure(
Configuration conf,
Class<E> clazz,
String text) throws Exception {
- interceptClosing(clazz,
+ return interceptClosing(clazz,
text,
() -> new Path(getFileSystem().getUri()).getFileSystem(conf));
}
@@ -246,6 +266,60 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
"Member must satisfy regular expression pattern");
}
+ /**
+ * A duration >1h is forbidden client-side in AWS SDK 1.11.271;
+ * with the ability to extend durations deployed in March 2018,
+ * duration checks will need to go server-side, and, presumably,
+ * later SDKs will remove the client side checks.
+ * This code exists to see when this happens.
+ */
+ @Test
+ public void testAssumeRoleThreeHourSessionDuration() throws Exception {
+ describe("Try to authenticate with a long session duration");
+
+ Configuration conf = createAssumedRoleConfig();
+ // add a duration of three hours
+ conf.setInt(ASSUMED_ROLE_SESSION_DURATION, 3 * 60 * 60);
+ try {
+ new Path(getFileSystem().getUri()).getFileSystem(conf).close();
+ LOG.info("Successfully created token of a duration >3h");
+ } catch (IOException ioe) {
+ assertExceptionContains(E_DURATION_RANGE_1, ioe);
+ }
+ }
+
+ /**
+ * A duration >1h is forbidden client-side in AWS SDK 1.11.271;
+ * with the ability to extend durations deployed in March 2018.
+ * With later SDKs, the checks move server-side and
+ * the client-side checks are removed.
+ * This test asks for a duration which will still be rejected, and
+ * looks for either of the error messages raised.
+ */
+ @Test
+ public void testAssumeRoleThirtySixHourSessionDuration() throws Exception {
+ describe("Try to authenticate with a long session duration");
+
+ Configuration conf = createAssumedRoleConfig();
+ conf.setInt(ASSUMED_ROLE_SESSION_DURATION, 36 * 60 * 60);
+ IOException ioe = expectFileSystemCreateFailure(conf,
+ IOException.class, null);
+ assertIsRangeException(ioe);
+ }
+
+ /**
+ * Look for either the client-side or STS-side range exception.
+ * @param e exception
+ * @throws Exception the exception, if its text doesn't match
+ */
+ private void assertIsRangeException(final Exception e) throws Exception {
+ String message = e.toString();
+ if (!message.contains(E_DURATION_RANGE_1)
+ && !message.contains(E_DURATION_RANGE_2)
+ && !message.contains(E_DURATION_RANGE_3)) {
+ throw e;
+ }
+ }
/**
* Create the assumed role configuration.
@@ -280,11 +354,11 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
describe("Expect the constructor to fail if the session is to short");
Configuration conf = new Configuration();
conf.set(ASSUMED_ROLE_SESSION_DURATION, "30s");
- interceptClosing(IllegalArgumentException.class, "",
+ Exception ex = interceptClosing(Exception.class, "",
() -> new AssumedRoleCredentialProvider(uri, conf));
+ assertIsRangeException(ex);
}
-
@Test
public void testAssumeRoleCreateFS() throws IOException {
describe("Create an FS client with the role and do some basic IO");
@@ -296,24 +370,32 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
conf.get(ACCESS_KEY), roleARN);
try (FileSystem fs = path.getFileSystem(conf)) {
- fs.getFileStatus(new Path("/"));
+ fs.getFileStatus(ROOT);
fs.mkdirs(path("testAssumeRoleFS"));
}
}
@Test
public void testAssumeRoleRestrictedPolicyFS() throws Exception {
- describe("Restrict the policy for this session; verify that reads fail");
+ describe("Restrict the policy for this session; verify that reads fail.");
+ // there's some special handling of S3Guard here as operations
+ // which only go to DDB don't fail the way S3 would reject them.
Configuration conf = createAssumedRoleConfig();
bindRolePolicy(conf, RESTRICTED_POLICY);
Path path = new Path(getFileSystem().getUri());
+ boolean guarded = getFileSystem().hasMetadataStore();
try (FileSystem fs = path.getFileSystem(conf)) {
- forbidden("getFileStatus",
- () -> fs.getFileStatus(new Path("/")));
- forbidden("getFileStatus",
- () -> fs.listStatus(new Path("/")));
- forbidden("getFileStatus",
+ if (!guarded) {
+ // when S3Guard is enabled, the restricted policy still
+ // permits S3Guard record lookup, so getFileStatus calls
+ // will work iff the record is in the database.
+ forbidden("getFileStatus",
+ () -> fs.getFileStatus(ROOT));
+ }
+ forbidden("",
+ () -> fs.listStatus(ROOT));
+ forbidden("",
() -> fs.mkdirs(path("testAssumeRoleFS")));
}
}
@@ -333,7 +415,11 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
Configuration conf = createAssumedRoleConfig();
bindRolePolicy(conf,
- policy(statement(false, S3_ALL_BUCKETS, S3_GET_OBJECT_TORRENT)));
+ policy(
+ statement(false, S3_ALL_BUCKETS, S3_GET_OBJECT_TORRENT),
+ ALLOW_S3_GET_BUCKET_LOCATION,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_RW));
Path path = path("testAssumeRoleStillIncludesRolePerms");
roleFS = (S3AFileSystem) path.getFileSystem(conf);
assertTouchForbidden(roleFS, path);
@@ -342,6 +428,8 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
/**
* After blocking all write verbs used by S3A, try to write data (fail)
* and read data (succeed).
+ * For S3Guard: full DDB RW access is retained.
+ * SSE-KMS key access is set to decrypt only.
*/
@Test
public void testReadOnlyOperations() throws Throwable {
@@ -352,7 +440,9 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
bindRolePolicy(conf,
policy(
statement(false, S3_ALL_BUCKETS, S3_PATH_WRITE_OPERATIONS),
- STATEMENT_ALL_S3, STATEMENT_ALL_DDB));
+ STATEMENT_ALL_S3,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_READ));
Path path = methodPath();
roleFS = (S3AFileSystem) path.getFileSystem(conf);
// list the root path, expect happy
@@ -399,8 +489,9 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
Configuration conf = createAssumedRoleConfig();
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
statement(true, S3_ALL_BUCKETS, S3_ROOT_READ_OPERATIONS),
+ STATEMENT_ALLOW_SSE_KMS_RW,
new Statement(Effects.Allow)
.addActions(S3_ALL_OPERATIONS)
.addResources(directory(restrictedDir)));
@@ -447,7 +538,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
}
/**
- * Execute a sequence of rename operations.
+ * Execute a sequence of rename operations with access locked down.
* @param conf FS configuration
*/
public void executeRestrictedRename(final Configuration conf)
@@ -461,7 +552,8 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
fs.delete(basePath, true);
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_RW,
statement(true, S3_ALL_BUCKETS, S3_ROOT_READ_OPERATIONS),
new Statement(Effects.Allow)
.addActions(S3_PATH_RW_OPERATIONS)
@@ -503,6 +595,25 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
}
/**
+ * Without simulation of STS failures, and with STS overload likely to
+ * be very rare, there'll be no implicit test coverage of
+ * {@link AssumedRoleCredentialProvider#operationRetried(String, Exception, int, boolean)}.
+ * This test simply invokes the callback for both the first and second retry event.
+ *
+ * If the handler ever adds more than logging, this test ensures that things
+ * don't break.
+ */
+ @Test
+ public void testAssumedRoleRetryHandler() throws Throwable {
+ try(AssumedRoleCredentialProvider provider
+ = new AssumedRoleCredentialProvider(getFileSystem().getUri(),
+ createAssumedRoleConfig())) {
+ provider.operationRetried("retry", new IOException("failure"), 0, true);
+ provider.operationRetried("retry", new IOException("failure"), 1, true);
+ }
+ }
+
+ /**
* Execute a sequence of rename operations where the source
* data is read only to the client calling rename().
* This will cause the inner delete() operations to fail, whose outcomes
@@ -534,7 +645,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
touch(fs, readOnlyFile);
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
statement(true, S3_ALL_BUCKETS, S3_ROOT_READ_OPERATIONS),
new Statement(Effects.Allow)
.addActions(S3_PATH_RW_OPERATIONS)
@@ -614,7 +725,8 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
fs.mkdirs(readOnlyDir);
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_RW,
statement(true, S3_ALL_BUCKETS, S3_ROOT_READ_OPERATIONS),
new Statement(Effects.Allow)
.addActions(S3_PATH_RW_OPERATIONS)
@@ -752,7 +864,8 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
fs.delete(destDir, true);
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_RW,
statement(true, S3_ALL_BUCKETS, S3_ALL_OPERATIONS),
new Statement(Effects.Deny)
.addActions(S3_PATH_WRITE_OPERATIONS)
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
index bb66268..834826e 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
@@ -72,7 +72,8 @@ public class ITestAssumedRoleCommitOperations extends ITestCommitOperations {
Configuration conf = newAssumedRoleConfig(getConfiguration(),
getAssumedRoleARN());
bindRolePolicyStatements(conf,
- STATEMENT_ALL_DDB,
+ STATEMENT_S3GUARD_CLIENT,
+ STATEMENT_ALLOW_SSE_KMS_RW,
statement(true, S3_ALL_BUCKETS, S3_ROOT_READ_OPERATIONS),
new RoleModel.Statement(RoleModel.Effects.Allow)
.addActions(S3_PATH_RW_OPERATIONS)
@@ -81,7 +82,6 @@ public class ITestAssumedRoleCommitOperations extends ITestCommitOperations {
roleFS = (S3AFileSystem) restrictedDir.getFileSystem(conf);
}
-
@Override
public void teardown() throws Exception {
S3AUtils.closeAll(LOG, roleFS);
@@ -122,7 +122,6 @@ public class ITestAssumedRoleCommitOperations extends ITestCommitOperations {
return new Path(restrictedDir, filepath);
}
-
private String getAssumedRoleARN() {
return getContract().getConf().getTrimmed(ASSUMED_ROLE_ARN, "");
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
index 9fa2600..854e7ec 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
@@ -58,14 +58,23 @@ public final class RoleTestUtils {
/** Deny GET requests to all buckets. */
- public static final Statement DENY_GET_ALL =
+ public static final Statement DENY_S3_GET_OBJECT =
statement(false, S3_ALL_BUCKETS, S3_GET_OBJECT);
+ public static final Statement ALLOW_S3_GET_BUCKET_LOCATION
+ = statement(true, S3_ALL_BUCKETS, S3_GET_BUCKET_LOCATION);
+
/**
- * This is AWS policy removes read access.
+ * This AWS policy removes read access from S3 while leaving S3Guard
+ * access intact. It allows clients to use S3Guard list/HEAD operations,
+ * and even to write records, but not to read the underlying data.
+ * The client does need {@link RolePolicies#S3_GET_BUCKET_LOCATION} to
+ * get the bucket location.
*/
- public static final Policy RESTRICTED_POLICY = policy(DENY_GET_ALL);
-
+ public static final Policy RESTRICTED_POLICY = policy(
+ DENY_S3_GET_OBJECT, STATEMENT_ALL_DDB, ALLOW_S3_GET_BUCKET_LOCATION
+ );
/**
* Error message to get from the AWS SDK if you can't assume the role.
@@ -145,7 +154,7 @@ public final class RoleTestUtils {
Configuration conf = new Configuration(srcConf);
conf.set(AWS_CREDENTIALS_PROVIDER, AssumedRoleCredentialProvider.NAME);
conf.set(ASSUMED_ROLE_ARN, roleARN);
- conf.set(ASSUMED_ROLE_SESSION_NAME, "valid");
+ conf.set(ASSUMED_ROLE_SESSION_NAME, "test");
conf.set(ASSUMED_ROLE_SESSION_DURATION, "15m");
disableFilesystemCaching(conf);
return conf;
@@ -163,9 +172,8 @@ public final class RoleTestUtils {
String contained,
Callable<T> eval)
throws Exception {
- AccessDeniedException ex = intercept(AccessDeniedException.class, eval);
- GenericTestUtils.assertExceptionContains(contained, ex);
- return ex;
+ return intercept(AccessDeniedException.class,
+ contained, eval);
}
}
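The slimmed-down forbidden() helper now delegates to LambdaTestUtils.intercept, which fails the test unless the callable throws an AccessDeniedException whose text contains the expected fragment. A usage sketch under RESTRICTED_POLICY, with hypothetical roleFS/dataPath names and an illustrative message fragment; note the GET is only issued once the stream is read:

    // Sketch: a role denied s3:GetObject can open the path lazily, but the
    // first read must fail with an AccessDeniedException.
    // Assumes an import of org.apache.hadoop.fs.FSDataInputStream.
    forbidden("getObject",
        () -> {
          try (FSDataInputStream in = roleFS.open(dataPath)) {
            return in.read();
          }
        });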
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
index f591e32..9185fc5 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
@@ -32,6 +32,7 @@ import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.fs.s3a.S3AUtils;
import org.apache.hadoop.util.StopWatch;
import com.google.common.base.Preconditions;
import org.apache.hadoop.fs.FileSystem;
@@ -51,6 +52,7 @@ import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.ExitUtil;
import org.apache.hadoop.util.StringUtils;
+import static org.apache.hadoop.fs.s3a.Constants.METADATASTORE_AUTHORITATIVE;
import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_NAME_KEY;
import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_NULL;
import static org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL;
@@ -144,8 +146,11 @@ public abstract class AbstractS3GuardToolTestBase extends AbstractS3ATestBase {
// Also create a "raw" fs without any MetadataStore configured
Configuration conf = new Configuration(getConfiguration());
- conf.set(S3_METADATA_STORE_IMPL, S3GUARD_METASTORE_NULL);
URI fsUri = getFileSystem().getUri();
+ conf.set(S3_METADATA_STORE_IMPL, S3GUARD_METASTORE_NULL);
+ S3AUtils.setBucketOption(conf, fsUri.getHost(),
+ METADATASTORE_AUTHORITATIVE,
+ S3GUARD_METASTORE_NULL);
rawFs = (S3AFileSystem) FileSystem.newInstance(fsUri, conf);
}
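The new S3AUtils.setBucketOption call uses S3A's per-bucket configuration mechanism: a generic fs.s3a.* key is rewritten under the fs.s3a.bucket.BUCKET. prefix so the override applies to that bucket only. A sketch of the mapping, assuming a hypothetical bucket "example-bucket" (fs.s3a.metadatastore.authoritative is a boolean key, so the sketch passes "false"):

    // Sketch of a per-bucket override for a hypothetical bucket.
    Configuration conf = new Configuration();
    S3AUtils.setBucketOption(conf, "example-bucket",
        METADATASTORE_AUTHORITATIVE, "false");
    // Equivalent to setting the expanded key directly:
    //   fs.s3a.bucket.example-bucket.metadatastore.authoritative = false
    // S3A copies per-bucket values over the base fs.s3a.* options when a
    // filesystem for that bucket is created.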
http://git-wip-us.apache.org/repos/asf/hadoop/blob/da9a39ee/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
index c6838a0..22a1efd 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
@@ -40,8 +40,10 @@ import org.junit.rules.Timeout;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.AWSCredentialProviderList;
import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_REGION_KEY;
@@ -80,81 +82,102 @@ public class ITestS3GuardConcurrentOps extends AbstractS3ATestBase {
@Test
public void testConcurrentTableCreations() throws Exception {
- final Configuration conf = getConfiguration();
+ S3AFileSystem fs = getFileSystem();
+ final Configuration conf = fs.getConf();
Assume.assumeTrue("Test only applies when DynamoDB is used for S3Guard",
conf.get(Constants.S3_METADATA_STORE_IMPL).equals(
Constants.S3GUARD_METASTORE_DYNAMO));
- DynamoDBMetadataStore ms = new DynamoDBMetadataStore();
- ms.initialize(getFileSystem());
- DynamoDB db = ms.getDynamoDB();
-
- String tableName = "testConcurrentTableCreations" + new Random().nextInt();
- conf.setBoolean(Constants.S3GUARD_DDB_TABLE_CREATE_KEY, true);
- conf.set(Constants.S3GUARD_DDB_TABLE_NAME_KEY, tableName);
+ AWSCredentialProviderList sharedCreds =
+ fs.shareCredentials("testConcurrentTableCreations");
+ // close that shared copy.
+ sharedCreds.close();
+ // this is the original reference count.
+ int originalRefCount = sharedCreds.getRefCount();
- String region = conf.getTrimmed(S3GUARD_DDB_REGION_KEY);
- if (StringUtils.isEmpty(region)) {
- // no region set, so pick it up from the test bucket
- conf.set(S3GUARD_DDB_REGION_KEY, getFileSystem().getBucketLocation());
- }
- int concurrentOps = 16;
- int iterations = 4;
+ // now init the store; this should increment the ref count.
+ DynamoDBMetadataStore ms = new DynamoDBMetadataStore();
+ ms.initialize(fs);
- failIfTableExists(db, tableName);
+ // the ref count should have gone up
+ assertEquals("Credential Ref count unchanged after initializing metastore "
+ + sharedCreds,
+ originalRefCount + 1, sharedCreds.getRefCount());
+ try {
+ DynamoDB db = ms.getDynamoDB();
- for (int i = 0; i < iterations; i++) {
- ExecutorService executor = Executors.newFixedThreadPool(
- concurrentOps, new ThreadFactory() {
- private AtomicInteger count = new AtomicInteger(0);
+ String tableName = "testConcurrentTableCreations" + new Random().nextInt();
+ conf.setBoolean(Constants.S3GUARD_DDB_TABLE_CREATE_KEY, true);
+ conf.set(Constants.S3GUARD_DDB_TABLE_NAME_KEY, tableName);
- public Thread newThread(Runnable r) {
- return new Thread(r,
- "testConcurrentTableCreations" + count.getAndIncrement());
+ String region = conf.getTrimmed(S3GUARD_DDB_REGION_KEY);
+ if (StringUtils.isEmpty(region)) {
+ // no region set, so pick it up from the test bucket
+ conf.set(S3GUARD_DDB_REGION_KEY, fs.getBucketLocation());
+ }
+ int concurrentOps = 16;
+ int iterations = 4;
+
+ failIfTableExists(db, tableName);
+
+ for (int i = 0; i < iterations; i++) {
+ ExecutorService executor = Executors.newFixedThreadPool(
+ concurrentOps, new ThreadFactory() {
+ private AtomicInteger count = new AtomicInteger(0);
+
+ public Thread newThread(Runnable r) {
+ return new Thread(r,
+ "testConcurrentTableCreations" + count.getAndIncrement());
+ }
+ });
+ ((ThreadPoolExecutor) executor).prestartAllCoreThreads();
+ Future<Exception>[] futures = new Future[concurrentOps];
+ for (int f = 0; f < concurrentOps; f++) {
+ final int index = f;
+ futures[f] = executor.submit(new Callable<Exception>() {
+ @Override
+ public Exception call() throws Exception {
+
+ ContractTestUtils.NanoTimer timer =
+ new ContractTestUtils.NanoTimer();
+
+ Exception result = null;
+ try (DynamoDBMetadataStore store = new DynamoDBMetadataStore()) {
+ store.initialize(conf);
+ } catch (Exception e) {
+ LOG.error(e.getClass() + ": " + e.getMessage());
+ result = e;
+ }
+
+ timer.end("Parallel DynamoDB client creation %d", index);
+ LOG.info("Parallel DynamoDB client creation {} ran from {} to {}",
+ index, timer.getStartTime(), timer.getEndTime());
+ return result;
}
});
- ((ThreadPoolExecutor) executor).prestartAllCoreThreads();
- Future<Exception>[] futures = new Future[concurrentOps];
- for (int f = 0; f < concurrentOps; f++) {
- final int index = f;
- futures[f] = executor.submit(new Callable<Exception>() {
- @Override
- public Exception call() throws Exception {
-
- ContractTestUtils.NanoTimer timer =
- new ContractTestUtils.NanoTimer();
-
- Exception result = null;
- try (DynamoDBMetadataStore store = new DynamoDBMetadataStore()) {
- store.initialize(conf);
- } catch (Exception e) {
- LOG.error(e.getClass() + ": " + e.getMessage());
- result = e;
- }
-
- timer.end("Parallel DynamoDB client creation %d", index);
- LOG.info("Parallel DynamoDB client creation {} ran from {} to {}",
- index, timer.getStartTime(), timer.getEndTime());
- return result;
+ }
+ List<Exception> exceptions = new ArrayList<>(concurrentOps);
+ for (int f = 0; f < concurrentOps; f++) {
+ Exception outcome = futures[f].get();
+ if (outcome != null) {
+ exceptions.add(outcome);
}
- });
- }
- List<Exception> exceptions = new ArrayList<>(concurrentOps);
- for (int f = 0; f < concurrentOps; f++) {
- Exception outcome = futures[f].get();
- if (outcome != null) {
- exceptions.add(outcome);
+ }
+ deleteTable(db, tableName);
+ int exceptionsThrown = exceptions.size();
+ if (exceptionsThrown > 0) {
+ // at least one exception was thrown. Fail the test & nest the first
+ // exception caught
+ throw new AssertionError(exceptionsThrown + "/" + concurrentOps +
+ " threads threw exceptions while initializing on iteration " + i,
+ exceptions.get(0));
}
}
- deleteTable(db, tableName);
- int exceptionsThrown = exceptions.size();
- if (exceptionsThrown > 0) {
- // at least one exception was thrown. Fail the test & nest the first
- // exception caught
- throw new AssertionError(exceptionsThrown + "/" + concurrentOps +
- " threads threw exceptions while initializing on iteration " + i,
- exceptions.get(0));
- }
+ } finally {
+ ms.close();
}
+ assertEquals("Credential Ref count unchanged after closing metastore: "
+ + sharedCreds,
+ originalRefCount, sharedCreds.getRefCount());
}
}
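The rewritten test pins down the reference-counting contract this patch gives AWSCredentialProviderList: S3AFileSystem.shareCredentials() hands out the filesystem's provider list with the reference count bumped, and close() releases one reference instead of destroying the providers. A minimal sketch of that contract, assuming an already-initialized S3AFileSystem fs:

    // Sketch of the sharing contract; "sketch" is just a diagnostics label.
    AWSCredentialProviderList creds = fs.shareCredentials("sketch");
    int before = creds.getRefCount();
    try {
      // hand creds to another AWS client, e.g. a DynamoDBMetadataStore
    } finally {
      creds.close();  // drops this share; fs still holds its own reference
    }
    assertEquals(before - 1, creds.getRefCount());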