Posted to common-commits@hadoop.apache.org by ae...@apache.org on 2017/09/18 22:11:50 UTC

[07/50] [abbrv] hadoop git commit: HADOOP-14738 Remove S3N and obsolete bits of S3A; rework docs. Contributed by Steve Loughran.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
index fe67d69..7957122 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
@@ -107,7 +107,6 @@ It is recommended that you leave the default setting here:
     <name>fs.s3a.metadatastore.authoritative</name>
     <value>false</value>
 </property>
-
 ```
 
 Setting this to `true` is currently an experimental feature.  When true, the
@@ -510,7 +509,6 @@ log4j.logger.com.amazonaws.services.dynamodbv2.AmazonDynamoDB
 # Log all HTTP requests made; includes S3 interaction. This may
 # include sensitive information such as account IDs in HTTP headers.
 log4j.logger.com.amazonaws.request=DEBUG
-
 ```
 
 If all else fails, S3Guard is designed to allow for easy recovery by deleting
@@ -538,7 +536,7 @@ S3Guard tables are created with a version marker, an entry with the primary
 key and child entry of `../VERSION`; the use of a relative path guarantees
 that it will not be resolved.
 
-#### Versioning policy.
+*Versioning policy*
 
 1. The version number of an S3Guard table will only be incremented when
 an incompatible change is made to the table structure —that is, the structure
@@ -557,7 +555,7 @@ in an incompatible manner. The version marker in tables exists to support
 such an option if it ever becomes necessary, by ensuring that all S3Guard
 clients can recognise any version mismatch.
 
-### Security
+## Security
 
 All users of the DynamoDB table must have write access to it. This
 effectively means they must have write access to the entire object store.
@@ -569,9 +567,9 @@ are only made after successful file creation, deletion and rename, the
 store is *unlikely* to get out of sync, it is still something which
 merits more testing before it could be considered reliable.
 
-### Troubleshooting
+## Troubleshooting
 
-#### Error: `S3Guard table lacks version marker.`
+### Error: `S3Guard table lacks version marker.`
 
 The table which was intended to be used as an S3Guard metadata store
 does not have any version marker indicating that it is an S3Guard table.
@@ -581,7 +579,7 @@ It may be that this is not a S3Guard table.
 * Make sure that this is the correct table name.
 * Delete the table, so it can be rebuilt.
 
-#### Error: `Database table is from an incompatible S3Guard version`
+### Error: `Database table is from an incompatible S3Guard version`
 
 This indicates that the version of S3Guard which created (or possibly updated)
 the database table is from a different version than that expected by the S3A
@@ -596,7 +594,7 @@ bucket. Upgrade the application/library.
 If the expected version is higher than the actual version, then the table
 itself will need upgrading.
 
-#### Error `"DynamoDB table TABLE does not exist in region REGION; auto-creation is turned off"`
+### Error `"DynamoDB table TABLE does not exist in region REGION; auto-creation is turned off"`
 
 S3Guard could not find the DynamoDB table for the Metadata Store,
 and it was not configured to create it. Either the table was missing,
@@ -608,3 +606,8 @@ or the configuration is preventing S3Guard from finding the table.
 1. If the region is not set, verify that the table exists in the same
 region as the bucket being used.
 1. Create the table if necessary.
+
+
+## Other Topics
+
+For details on how to test S3Guard, see [Testing S3Guard](./testing.html#s3guard).

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md
new file mode 100644
index 0000000..9b59ad1
--- /dev/null
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md
@@ -0,0 +1,52 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# The S3N Client
+
+<!-- MACRO{toc|fromDepth=0|toDepth=5} -->
+
+S3N was a Hadoop filesystem client which could read and write data stored
+in Amazon S3. It used URLs with the scheme `s3n://`.
+
+- - -
+
+**Hadoop's S3N client for Amazon S3 has been superseded by
+the S3A connector**
+
+**Please upgrade to S3A for a supported, higher-performance S3 Client**
+
+- - -
+
+
+## <a name="migrating"></a> How to migrate to to the S3A client
+
+1. Keep the `hadoop-aws` JAR on your classpath.
+
+1. Add the `aws-java-sdk-bundle.jar` JAR, which ships with Hadoop,
+to your classpath.
+
+1. Change the authentication keys
+
+    | old key | new key |
+    |---------|---------|
+    | `fs.s3n.awsAccessKeyId` | `fs.s3a.access.key` |
+    | `fs.s3n.awsSecretAccessKey` | `fs.s3a.secret.key` |
+
+    Do make sure the property names are correct. For S3A, they are
+    `fs.s3a.access.key` and `fs.s3a.secret.key` —you cannot just copy the S3N
+    properties and replace `s3n` with `s3a`. A configuration sketch follows this list.
+
+1. Replace URLs which begin with `s3n://` with `s3a://`.
+
+1. You may now remove the `jets3t` JAR, as it is no longer needed.
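+
+As a sketch of the end state (the bucket name and key values here are
+placeholders), the relevant `core-site.xml` entries would be:
+
+```xml
+<property>
+  <name>fs.s3a.access.key</name>
+  <value>YOUR_ACCESS_KEY</value>
+</property>
+
+<property>
+  <name>fs.s3a.secret.key</name>
+  <value>YOUR_SECRET_KEY</value>
+</property>
+```
+
+After this, paths such as `s3a://mybucket/data` replace `s3n://mybucket/data`.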

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
index 19d322d..cf7a2e4 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
@@ -12,9 +12,9 @@
   limitations under the License. See accompanying LICENSE file.
 -->
 
-# Testing the S3 filesystem clients
+# Testing the S3A filesystem client and its features, including S3Guard
 
-<!-- MACRO{toc|fromDepth=0|toDepth=5} -->
+<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
 
 This module includes both unit tests, which can run in isolation without
 connecting to the S3 service, and integration tests, which require a working
@@ -26,7 +26,7 @@ Due to eventual consistency, integration tests may fail without reason.
 Transient failures, which no longer occur upon rerunning the test, should thus
 be ignored.
 
-## Policy for submitting patches which affect the `hadoop-aws` module.
+## <a name="policy"></a> Policy for submitting patches which affect the `hadoop-aws` module.
 
 The Apache Jenkins infrastructure does not run any S3 integration tests,
 due to the need to keep credentials secure.
@@ -74,7 +74,7 @@ in the production code, that could be a sign of a problem which may surface
 over long-haul connections. Please help us identify and fix these problems
 &mdash; especially as you are the one best placed to verify the fixes work.
 
-## Setting up the tests
+## <a name="setting-up"></a> Setting up the tests
 
 To integration test the S3* filesystem clients, you need to provide
 `auth-keys.xml` which passes in authentication details to the test runner.
@@ -101,16 +101,11 @@ The XML file must contain all the ID/key information needed to connect
 each of the filesystem clients to the object stores, and a URL for
 each filesystem for its testing.
 
-1. `test.fs.s3n.name` : the URL of the bucket for S3n tests
 1. `test.fs.s3a.name` : the URL of the bucket for S3a tests
-1. `fs.contract.test.fs.s3n` : the URL of the bucket for S3n filesystem contract tests
 1. `fs.contract.test.fs.s3a` : the URL of the bucket for S3a filesystem contract tests
 
-*Note* that running s3a and s3n tests in parallel mode, against the same bucket
-is unreliable.  We recommend using separate buckets or testing one connector
-at a time.
 
-The contents of each bucket will be destroyed during the test process:
+The contents of the bucket will be destroyed during the test process:
 do not use the bucket for any purpose other than testing. Furthermore, for
 s3a, all in-progress multi-part uploads to the bucket will be aborted at the
 start of a test (by forcing `fs.s3a.multipart.purge=true`) to clean up the
@@ -122,26 +117,6 @@ Example:
 <configuration>
 
   <property>
-    <name>test.fs.s3n.name</name>
-    <value>s3n://test-aws-s3n/</value>
-  </property>
-
-  <property>
-    <name>fs.contract.test.fs.s3n</name>
-    <value>${test.fs.s3n.name}</value>
-  </property>
-
-  <property>
-    <name>fs.s3n.awsAccessKeyId</name>
-    <value>DONOTPCOMMITTHISKEYTOSCM</value>
-  </property>
-
-  <property>
-    <name>fs.s3n.awsSecretAccessKey</name>
-    <value>DONOTEVERSHARETHISSECRETKEY!</value>
-  </property>
-
-  <property>
     <name>test.fs.s3a.name</name>
     <value>s3a://test-aws-s3a/</value>
   </property>
@@ -172,7 +147,7 @@ Example:
 </configuration>
 ```
 
-### Configuring S3a Encryption
+### <a name="encryption"></a> Configuring S3a Encryption
 
 For S3a encryption tests to run correctly, the
 `fs.s3a.server-side-encryption.key` must be configured in the s3a contract xml
@@ -192,7 +167,7 @@ You can also force all the tests to run with a specific SSE encryption method
 by configuring the property `fs.s3a.server-side-encryption-algorithm` in the s3a
 contract file.
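 
 As an illustrative sketch (the key ARN below is a placeholder, not a real
 key), the contract XML entries might be:
 
 ```xml
 <property>
   <name>fs.s3a.server-side-encryption.key</name>
   <value>arn:aws:kms:us-west-2:000000000000:key/example-key-id</value>
 </property>
 
 <property>
   <name>fs.s3a.server-side-encryption-algorithm</name>
   <value>SSE-KMS</value>
 </property>
 ```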
 
-## Running the Tests
+## <a name="running"></a> Running the Tests
 
 After completing the configuration, execute the test run through Maven.
 
@@ -251,7 +226,7 @@ combination with `test` or `it.test`.  If you know that you are specifying only
 tests that can run safely in parallel, then it will work.  For wide patterns,
 like `ITestS3A*` shown above, it may cause unpredictable test failures.
 
-### Testing against different regions
+### <a name="regions"></a> Testing against different regions
 
 S3A can connect to different regions —the tests support this. Simply
 define the target region in `auth-keys.xml`.
@@ -265,7 +240,7 @@ define the target region in `auth-keys.xml`.
 This is used for all tests except for scale tests using a public CSV.gz file
 (see below)
 
-### CSV Data source Tests
+### <a name="csv"></a> CSV Data Tests
 
 The `TestS3AInputStreamPerformance` tests require read access to a multi-MB
 text file. The default file for these tests is one published by amazon,
@@ -303,7 +278,7 @@ For the default test dataset, hosted in the `landsat-pds` bucket, this is:
 </property>
 ```
 
-### Viewing Integration Test Reports
+## <a name="reporting"></a> Viewing Integration Test Reports
 
 
 Integration test results and logs are stored in `target/failsafe-reports/`.
@@ -313,7 +288,7 @@ plugin:
 ```bash
 mvn surefire-report:failsafe-report-only
 ```
-### Scale Tests
+## <a name="scale"></a> Scale Tests
 
 There is a set of tests designed to measure the scalability and performance
 of the S3A client at scale, the *Scale Tests*. Tests include: creating
@@ -325,7 +300,7 @@ By their very nature they are slow. And, as their execution time is often
 limited by bandwidth between the computer running the tests and the S3 endpoint,
 parallel execution does not speed these tests up.
 
-#### Enabling the Scale Tests
+### <a name="enabling-scale"></a> Enabling the Scale Tests
 
 The tests are enabled if the `scale` property is set in the maven build;
 this can be done regardless of whether or not the parallel test profile
@@ -342,7 +317,7 @@ sequentially; those which are slow due to HTTPS setup costs or server-side
 actions are included in the set of parallelized tests.
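 
 A sketch of such a run (assuming the `scale` property can simply be defined
 on the Maven command line):
 
 ```bash
 mvn verify -Dscale
 ```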
 
 
-#### Maven build tuning options
+### <a name="tuning_scale"></a> Tuning scale optins from Maven
 
 
 Some of the tests can be tuned from the maven build or from the
@@ -373,7 +348,7 @@ Only a few properties can be set this way; more will be added.
 The file and partition sizes are numeric values with a k/m/g/t/p suffix depending
 on the desired size. For example: 128M, 128m, 2G, 2g, 4T or even 1P.
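 
 As a sketch, overriding the huge file size from the Maven command line
 (assuming the property is passed through to the test JVM with `-D`):
 
 ```bash
 mvn verify -Dscale -Dfs.s3a.scale.test.huge.filesize=256M
 ```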
 
-#### Scale test configuration options
+### <a name="scale-config"></a> Scale test configuration options
 
 Some scale tests perform multiple operations (such as creating many directories).
 
@@ -418,7 +393,7 @@ smaller to achieve faster test runs.
 
 S3A specific scale test properties are
 
-##### `fs.s3a.scale.test.huge.filesize`: size in MB for "Huge file tests".
+*`fs.s3a.scale.test.huge.filesize`: size in MB for "Huge file tests".*
 
 The Huge File tests validate S3A's ability to handle large files —the property
 `fs.s3a.scale.test.huge.filesize` declares the file size to use.
@@ -452,13 +427,11 @@ Otherwise, set a large timeout in `fs.s3a.scale.test.timeout`
 </property>
 ```
 
-
 The tests are executed in an order chosen so that created files are only cleaned
 up after the end of all the tests. If the tests are interrupted, the test data will remain.
 
 
-
-## Testing against non AWS S3 endpoints.
+## <a name="alternate_s3"></a> Testing against non AWS S3 endpoints.
 
 The S3A filesystem is designed to work with storage endpoints which implement
 the S3 protocols to the extent that the amazon S3 SDK is capable of talking
@@ -527,7 +500,7 @@ An alternate endpoint may be defined in `test.fs.s3a.sts.endpoint`.
 The default is ""; meaning "use the amazon default value".
 
 
-## Debugging Test failures
+## <a name="debugging"></a> Debugging Test failures
 
 Logging at debug level is the standard way to provide more diagnostics output;
 after setting this rerun the tests
@@ -550,7 +523,7 @@ setting the `fs.s3a.user.agent.prefix` to a unique prefix for a specific
 test run, which will enable the specific log entries to be more easily
 located.
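 
 A sketch of such a setting (the value is arbitrary; choose something unique
 to the test run):
 
 ```xml
 <property>
   <name>fs.s3a.user.agent.prefix</name>
   <value>test-run-2017-09-04</value>
 </property>
 ```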
 
-## Adding new tests
+## <a name="new_tests"></a> Adding new tests
 
 New tests are always welcome. Bear in mind that we need to keep costs
 and test time down, which is done by
@@ -593,7 +566,7 @@ fail with meaningful diagnostics, so any new problems can be easily debugged
 from test logs.
 
 
-### Requirements of new Tests
+## <a name="requirements"></a> Requirements of new Tests
 
 
 This is what we expect from new tests; they're an extension of the normal
@@ -602,7 +575,7 @@ use requires the presence of secret credentials, where tests may be slow,
 and where finding out why something failed from nothing but the test output
 is critical.
 
-#### Subclasses Existing Shared Base Classes
+### Subclasses Existing Shared Base Classes
 
 Extend `AbstractS3ATestBase` or `AbstractSTestS3AHugeFiles` unless there is a good reason not to.
 These set things up for testing against the object stores, provide good threadnames,
@@ -619,12 +592,12 @@ defined in `fs.s3a.contract.test`
 Having shared base classes may help reduce future maintenance too. Please
 use them.
 
-#### Secure
+### Secure
 
 Don't ever log credentials. The credential tests go out of their way to
 not provide meaningful logs or assertion messages precisely to avoid this.
 
-#### Efficient of Time and Money
+### Efficient in Time and Money
 
 This means efficient in test setup/teardown, and, ideally, making use of
 existing public datasets to save setup time and tester cost.
@@ -650,7 +623,7 @@ against other regions, or with third party S3 implementations. Thus the
 URL can be overridden for testing elsewhere.
 
 
-#### Works With Other S3 Endpoints
+### Works With Other S3 Endpoints
 
 Don't assume AWS S3 US-East only, do allow for working with external S3 implementations.
 Those may be behind the latest S3 API features, not support encryption, session
@@ -678,7 +651,7 @@ adds some newlines so as to be easier to spot.
 1. Use `ContractTestUtils.NanoTimer` to measure the duration of operations,
 and log the output.
 
-#### Fails Meaningfully
+### Fails Meaningfully
 
 The `ContractTestUtils` class contains a whole set of assertions for making
 statements about the expected state of a filesystem, e.g.
@@ -705,7 +678,7 @@ get called.
 We really appreciate this &mdash; you will too.
 
 
-## Tips
+## <a name="tips"></a> Tips
 
 ### How to keep your credentials really safe
 
@@ -725,7 +698,7 @@ using an absolute XInclude reference to it.
 </configuration>
 ```
 
-# Failure Injection
+# <a name="failure-injection"></a> Failure Injection
 
 **Warning: do not enable any type of failure injection in production. The
 following settings are for testing only.**
@@ -858,7 +831,10 @@ The inconsistent client is shipped in the `hadoop-aws` JAR, so it can
 be used in applications which work with S3 to see how they handle
 inconsistent directory listings.
 
-## Testing S3Guard
+##<a name="s3guard"></a> Testing S3Guard
+
+[S3Guard](./s3guard.html) is an extension to S3A which adds consistent metadata
+listings to the S3A client. As it is part of S3A, it also needs to be tested.
 
 The basic strategy for testing S3Guard correctness consists of:
 
@@ -934,13 +910,6 @@ If the `s3guard` profile *is* set,
   overwrite any previously set in the configuration files.
 1. DynamoDB will be configured to create any missing tables.
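 
 As a sketch, a test run with the profile enabled (assuming the `s3guard`
 property can simply be defined on the Maven command line):
 
 ```bash
 mvn verify -Ds3guard
 ```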
 
-### Warning About Concurrent Tests
-
-You must not run S3A and S3N tests in parallel on the same bucket.  This is
-especially true when S3Guard is enabled.  S3Guard requires that all clients
-that are modifying the bucket have S3Guard enabled, so having S3N
-integration tests running in parallel with S3A tests will cause strange
-failures.
 
 ### Scale Testing MetadataStore Directly
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
index d79720e..619ffc1 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
@@ -12,27 +12,628 @@
   limitations under the License. See accompanying LICENSE file.
 -->
 
-# Troubleshooting S3A
+# Troubleshooting
 
-Here are some lower level details and hints on troubleshooting and tuning
-the S3A client.
+<!-- MACRO{toc|fromDepth=0|toDepth=5} -->
 
-## Logging at lower levels
+##<a name="introduction"></a>  Introduction
 
-The AWS SDK and the Apache HTTP components can be configured to log at
-more detail, as can S3A itself.
+Common problems working with S3 are:
+
+1. Classpath setup
+1. Authentication
+1. S3 Inconsistency side-effects
+
+Classpath is usually the first problem. For the S3x filesystem clients,
+you need the Hadoop-specific filesystem clients, third party S3 client libraries
+compatible with the Hadoop code, and any dependent libraries compatible with
+Hadoop and the specific JVM.
+
+The classpath must be set up for the process talking to S3: if this is code
+running in the Hadoop cluster, the JARs must be on that classpath. That
+includes `distcp` and the `hadoop fs` command.
+
+## <a name="classpath"></a> Classpath Setup
+
+### `ClassNotFoundException: org.apache.hadoop.fs.s3a.S3AFileSystem`
+
+This is the S3A filesystem client class, found in the `hadoop-aws` JAR.
+An exception reporting this class as missing means that this JAR is not on
+the classpath.
+
+### `ClassNotFoundException: com.amazonaws.services.s3.AmazonS3Client`
+
+(or other `com.amazonaws` class.)
+
+This means that the `aws-java-sdk-bundle.jar` JAR is not on the classpath:
+add it.
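+
+One way to check what is actually on the classpath is the `hadoop classpath`
+command; a sketch (the `grep` pattern is only illustrative):
+
+```bash
+hadoop classpath | tr ':' '\n' | grep -E 'hadoop-aws|aws-java-sdk'
+```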
+
+### Missing method in `com.amazonaws` class
+
+This can be triggered by incompatibilities between the AWS SDK on the classpath
+and the version which Hadoop was compiled with.
+
+The AWS SDK JARs change their signature enough between releases that the only
+way to safely update the AWS SDK version is to recompile Hadoop against the later
+version.
+
+The sole fix is to use the same version of the AWS SDK with which Hadoop
+was built.
+
+
+## <a name="authentication"></a> Authentication Failure
+
+Note that for security reasons, the S3A client does not provide much detail
+on the authentication process (i.e. the secrets used to authenticate).
+
+If Hadoop cannot authenticate with the S3 service endpoint,
+the client retries a number of times before eventually failing.
+When it finally gives up, it will report a message about signature mismatch:
+
+```
+com.amazonaws.services.s3.model.AmazonS3Exception:
+ The request signature we calculated does not match the signature you provided.
+ Check your key and signing method.
+  (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch,
+```
+
+Enabling debug logging for the package `org.apache.hadoop.fs.s3a`
+can help provide more information.
+
+The most common cause is that you have the wrong credentials for any of the current
+authentication mechanism(s) —or somehow
+the credentials were not readable on the host attempting to read or write
+the S3 Bucket. However, there are a couple of system configuration problems
+(JVM version, system clock) which also need to be checked.
+
+Most common: there's an error in the configuration properties.
+
+1. Make sure that the name of the bucket is the correct one.
+That is: check the URL.
+
+1. If using a private S3 server, make sure the endpoint in `fs.s3a.endpoint` has
+been set to this server, and that the client is not accidentally trying to
+authenticate with the public Amazon S3 service.
+
+1. Make sure the property names are correct. For S3A, they are
+`fs.s3a.access.key` and `fs.s3a.secret.key` —you cannot just copy the S3N
+properties and replace `s3n` with `s3a`.
+
+1. Make sure the properties are visible to the process attempting to
+talk to the object store. Placing them in `core-site.xml` is the standard
+mechanism.
+
+1. If using session authentication, the session may have expired.
+Generate a new session token and secret.
+
+1. If using environment variable-based authentication, make sure that the
+relevant variables are set in the environment in which the process is running.
+
+The standard first step is to try the same credentials from the Hadoop command
+line, through a command such as:
+
+    hadoop fs -ls s3a://my-bucket/
+
+Note the trailing "/" here; without that the shell thinks you are trying to list
+your home directory under the bucket, which will only exist if explicitly created.
+
+
+Attempting to list a bucket using inline credentials is a
+means of verifying that the key and secret can access a bucket:
+
+    hadoop fs -ls s3a://key:secret@my-bucket/
+
+Do escape any `+` or `/` symbols in the secret, as discussed below, and never
+share the URL, logs generated using it, or use such an inline authentication
+mechanism in production.
+
+Finally, if you set the environment variables, you can take advantage of S3A's
+support of environment-variable authentication by attempting the same ls operation.
+That is: unset the `fs.s3a` secrets and rely on the environment variables.
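+
+A sketch of that check, using the standard AWS environment variables (the
+values are placeholders):
+
+```bash
+export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
+export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
+hadoop fs -ls s3a://my-bucket/
+```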
+
+### Authentication failure due to clock skew
+
+The timestamp is used in signing requests to S3, so as to
+defend against replay attacks. If the system clock is too far behind *or ahead*
+of Amazon's, requests will be rejected.
+
+This can surface as the situation where
+read requests are allowed, but operations which write to the bucket are denied.
+
+Check the system clock.
+
+### Authentication failure when using URLs with embedded secrets
+
+If using the (strongly discouraged) mechanism of including the
+AWS Key and secret in a URL, then both "+" and "/" symbols need
+to be encoded in the URL. As many AWS secrets include these characters,
+encoding problems are not uncommon.
+
+| symbol | encoded value |
+|-----------|-------------|
+| `+` | `%2B` |
+| `/` | `%2F` |
+
+
+As an example, a URL for `bucket` with AWS ID `user1` and secret `a+b/c` would
+be represented as
 
-```properties
-log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
-log4j.logger.com.amazonaws.request=DEBUG
-log4j.logger.org.apache.http=DEBUG
-log4j.logger.org.apache.http.wire=ERROR
 ```
+s3a://user1:a%2Bb%2Fc@bucket/
+```
+
+This technique is only needed when placing secrets in the URL. Again,
+this is something users are strongly advised against using.
+
+### <a name="bad_request"></a> "Bad Request" exception when working with AWS S3 Frankfurt, Seoul, or other "V4" endpoint
+
+
+S3 Frankfurt and Seoul *only* support
+[the V4 authentication API](http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html).
+
+Requests using the V2 API will be rejected with 400 `Bad Request`
+
+```
+$ bin/hadoop fs -ls s3a://frankfurt/
+WARN s3a.S3AFileSystem: Client: Amazon S3 error 400: 400 Bad Request; Bad Request (retryable)
+
+com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 923C5D9E75E44C06), S3 Extended Request ID: HDwje6k+ANEeDsM6aJ8+D5gUmNAMguOk2BvZ8PH3g9z0gpH+IuwT7N19oQOnIr5CIx7Vqb/uThE=
+    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
+    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
+    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
+    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
+    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
+    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107)
+    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:307)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:284)
+    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
+    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
+    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
+    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
+    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
+    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
+    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
+    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
+    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
+    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
+    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
+    at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
+    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
+    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
+    at org.apache.hadoop.fs.FsShell.main(FsShell.java:373)
+ls: doesBucketExist on frankfurt-new: com.amazonaws.services.s3.model.AmazonS3Exception:
+  Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request;
+```
+
+This happens when trying to work with any S3 service which only supports the
+"V4" signing API —but the client is configured to use the default S3 service
+endpoint.
+
+The S3A client needs to be given the endpoint to use via the `fs.s3a.endpoint`
+property.
+
+As an example, the endpoint for S3 Frankfurt is `s3.eu-central-1.amazonaws.com`:
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.eu-central-1.amazonaws.com</value>
+</property>
+```
+
+## <a name="connectivity"></a> Connectivity Problems
+
+### <a name="bad_endpoint"></a> Error message "The bucket you are attempting to access must be addressed using the specified endpoint"
+
+This surfaces when `fs.s3a.endpoint` is configured to use an S3 service endpoint
+which is neither the original AWS one, `s3.amazonaws.com` , nor the one where
+the bucket is hosted.  The error message contains the redirect target returned
+by S3, which can be used to determine the correct value for `fs.s3a.endpoint`.
+
+```
+org.apache.hadoop.fs.s3a.AWSS3IOException: Received permanent redirect response
+  to bucket.s3-us-west-2.amazonaws.com.  This likely indicates that the S3
+  endpoint configured in fs.s3a.endpoint does not match the AWS region
+  containing the bucket.: The bucket you are attempting to access must be
+  addressed using the specified endpoint. Please send all future requests to
+  this endpoint. (Service: Amazon S3; Status Code: 301;
+  Error Code: PermanentRedirect; Request ID: 7D39EC1021C61B11)
+        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:132)
+        at org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:287)
+        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:203)
+        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2895)
+        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:102)
+        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2932)
+        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2914)
+        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
+```
+
+1. Use the [Specific endpoint of the bucket's S3 service](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region)
+1. If not using "V4" authentication (see above), the original S3 endpoint
+can be used:
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.amazonaws.com</value>
+</property>
+```
+
+Using the explicit endpoint for the region is recommended for speed and
+to use the V4 signing API.
+
+
+### <a name="timeout"></a> "Timeout waiting for connection from pool" when writing data
+
+This happens when the output stream's thread pool runs out of capacity.
+
+```
+[s3a-transfer-shared-pool1-t20] INFO  http.AmazonHttpClient (AmazonHttpClient.java:executeHelper(496)) - Unable to execute HTTP request: Timeout waiting for connection from poolorg.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
+  at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:230)
+  at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
+  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
+  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
+  at java.lang.reflect.Method.invoke(Method.java:498)
+  at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
+  at com.amazonaws.http.conn.$Proxy10.getConnection(Unknown Source)
+  at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:424)
+  at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
+  at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
+  at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
+  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
+  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
+  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
+  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
+  at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
+  at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
+  at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1025)
+  at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:360)
+  at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:355)
+  at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
+  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
+  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
+  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
+  at java.lang.Thread.run(Thread.java:745)
+```
+
+Make sure that `fs.s3a.connection.maximum` is larger
+than `fs.s3a.threads.max`.
+
+```xml
+<property>
+  <name>fs.s3a.threads.max</name>
+  <value>20</value>
+</property>
+
+<property>
+  <name>fs.s3a.connection.maximum</name>
+  <value>30</value>
+</property>
+```
+
+### "Timeout waiting for connection from pool" when reading data
+
+This happens when more threads are trying to read from an S3A system than
+the maximum number of allocated HTTP connections.
+
+Set `fs.s3a.connection.maximum` to a larger value (and at least as large as
+`fs.s3a.threads.max`).
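+
+A sketch, using an illustrative value only:
+
+```xml
+<property>
+  <name>fs.s3a.connection.maximum</name>
+  <value>100</value>
+</property>
+```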
+
+### Out of heap memory when writing via Fast Upload
+
+This can happen when the upload buffering mechanism
+uses memory (either `fs.s3a.fast.upload.buffer=array` or
+`fs.s3a.fast.upload.buffer=bytebuffer`).
+
+More data is being generated in the JVM than it can upload to S3 —and
+so much data has been buffered that the JVM has run out of memory.
 
-Be aware that logging HTTP headers may leak sensitive AWS account information,
-so should not be shared.
+1. Consult [S3A Fast Upload Thread Tuning](./index.html#fast_upload_thread_tuning) for
+detail on this issue and options to address it.
 
-## Advanced: network performance
+1. Switch to buffering to disk, rather than memory (see the sketch below).
+
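+A sketch of switching to disk buffering (option 2 above; this assumes the
+`disk` buffer option described in the S3A fast upload documentation):
+
+```xml
+<property>
+  <name>fs.s3a.fast.upload.buffer</name>
+  <value>disk</value>
+</property>
+```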
+
+### `MultiObjectDeleteException` during delete or rename of files
+
+```
+Exception in thread "main" com.amazonaws.services.s3.model.MultiObjectDeleteException:
+    Status Code: 0, AWS Service: null, AWS Request ID: null, AWS Error Code: null,
+    AWS Error Message: One or more objects could not be deleted, S3 Extended Request ID: null
+  at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1745)
+```
+
+This happens when trying to delete multiple objects, and one of the objects
+could not be deleted. It *should not occur* just because an object is missing.
+More specifically: at the time this document was written, we could not create
+such a failure.
+
+It will occur if the caller lacks the permission to delete any of the objects.
+
+Consult the log to see the specifics of which objects could not be deleted.
+Do you have permission to do so?
+
+If this operation is failing for reasons other than the caller lacking
+permissions:
+
+1. Try setting `fs.s3a.multiobjectdelete.enable` to `false` (see the sketch after this list).
+1. Consult [HADOOP-11572](https://issues.apache.org/jira/browse/HADOOP-11572)
+for up to date advice.
+
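+As a sketch, the setting from the first option above:
+
+```xml
+<property>
+  <name>fs.s3a.multiobjectdelete.enable</name>
+  <value>false</value>
+</property>
+```
+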
+### "Failed to Sanitize XML document"
+
+```
+org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on test/testname/streaming/:
+  com.amazonaws.AmazonClientException: Failed to sanitize XML document
+  destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler:
+  Failed to sanitize XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
+    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:105)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1462)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1227)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1203)
+    at org.apache.hadoop.fs.s3a.S3AGlobber.listStatus(S3AGlobber.java:69)
+    at org.apache.hadoop.fs.s3a.S3AGlobber.doGlob(S3AGlobber.java:210)
+    at org.apache.hadoop.fs.s3a.S3AGlobber.glob(S3AGlobber.java:125)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:1853)
+    at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:1841)
+```
+
+We believe this is caused by the connection to S3 being broken.
+See [HADOOP-13811](https://issues.apache.org/jira/browse/HADOOP-13811).
+
+It may go away if the operation is retried.
+
+### JSON Parse Error from AWS SDK
+
+Sometimes a JSON parse error is reported, with the stack trace coming from
+code in the `com.amazonaws` packages.
+
+Again, we believe this is caused by the connection to S3 being broken.
+
+It may go away if the operation is retried.
+
+
+
+## Miscellaneous Errors
+
+### When writing data: "java.io.FileNotFoundException: Completing multi-part upload"
+
+
+```
+java.io.FileNotFoundException: Completing multi-part upload on fork-5/test/multipart/1c397ca6-9dfb-4ac1-9cf7-db666673246b: com.amazonaws.services.s3.model.AmazonS3Exception: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: 84FF8057174D9369), S3 Extended Request ID: Ij5Yn6Eq/qIERH4Z6Io3YL2t9/qNZ7z9gjPb1FrTtTovZ8k1MXqh+zCYYjqmfJ/fCY6E1+JR9jA=
+  at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
+  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
+  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
+  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
+  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
+  at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2705)
+  at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.complete(S3ABlockOutputStream.java:473)
+  at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$200(S3ABlockOutputStream.java:382)
+  at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:272)
+  at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
+  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
+```
+
+This surfaces if, while a multipart upload was taking place, all outstanding multipart
+uploads were garbage collected. The upload operation cannot complete because
+the data uploaded has been deleted.
+
+Consult [Cleaning up After Incremental Upload Failures](./index.html#multipart_purge) for
+details on how the multipart purge timeout can be set. If multipart uploads
+are failing with the message above, it may be a sign that this value is too low.
+
+### Issue: when writing data, HTTP exceptions logged at INFO from `AmazonHttpClient`
+
+```
+[s3a-transfer-shared-pool4-t6] INFO  http.AmazonHttpClient (AmazonHttpClient.java:executeHelper(496)) - Unable to execute HTTP request: hwdev-steve-ireland-new.s3.amazonaws.com:443 failed to respond
+org.apache.http.NoHttpResponseException: bucket.s3.amazonaws.com:443 failed to respond
+  at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
+  at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
+  at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
+  at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
+  at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:259)
+  at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:209)
+  at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
+  at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66)
+  at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
+  at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:686)
+  at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:488)
+  at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
+  at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
+  at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
+  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
+  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
+  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
+  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
+  at com.amazonaws.services.s3.AmazonS3Client.copyPart(AmazonS3Client.java:1731)
+  at com.amazonaws.services.s3.transfer.internal.CopyPartCallable.call(CopyPartCallable.java:41)
+  at com.amazonaws.services.s3.transfer.internal.CopyPartCallable.call(CopyPartCallable.java:28)
+  at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
+  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
+  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
+  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
+  at java.lang.Thread.run(Thread.java:745)
+```
+
+These are HTTP I/O exceptions caught and logged inside the AWS SDK. The client
+will attempt to retry the operation; it may just be a transient event. If there
+are many such exceptions in logs, it may be a symptom of connectivity or network
+problems.
+
+## File System Semantics
+
+These are the issues where S3 does not appear to behave the way a filesystem
+"should".
+
+### Visible S3 Inconsistency
+
+Amazon S3 is *an eventually consistent object store*. That is: not a filesystem.
+
+To reduce visible inconsistencies, use the [S3Guard](./s3guard.html) consistency
+cache.
+
+
+By default, Amazon S3 offers read-after-create consistency: a newly created file
+is immediately visible.
+There is a small quirk: a negative GET may be cached, such
+that even if an object is immediately created, the fact that there "wasn't"
+an object is still remembered.
+
+That means the following sequence on its own will be consistent:
+```
+touch(path) -> getFileStatus(path)
+```
+
+But this sequence *may* be inconsistent.
+
+```
+getFileStatus(path) -> touch(path) -> getFileStatus(path)
+```
+
+A common source of visible inconsistencies is that the S3 metadata
+database —the part of S3 which serves list requests— is updated asynchronously.
+Newly added or deleted files may not be visible in the index, even though direct
+operations on the object (`HEAD` and `GET`) succeed.
+
+That means the `getFileStatus()` and `open()` operations are more likely
+to be consistent with the state of the object store, but without S3Guard enabled,
+directory list operations such as `listStatus()`, `listFiles()`, `listLocatedStatus()`,
+and `listStatusIterator()` may not see newly created files, and still list
+old files.
+
+### `FileNotFoundException` even though the file was just written.
+
+This can be a sign of consistency problems. It may also surface if there is some
+asynchronous file write operation still in progress in the client: the operation
+has returned, but the write has not yet completed. While the S3A client code
+does block during the `close()` operation, we suspect that asynchronous writes
+may be taking place somewhere in the stack —this could explain why parallel tests
+fail more often than serialized tests.
+
+### File not found in a directory listing, even though `getFileStatus()` finds it
+
+(Similarly: deleted file found in listing, though `getFileStatus()` reports
+that it is not there)
+
+This is a visible sign of updates to the metadata server lagging
+behind the state of the underlying filesystem.
+
+Fix: Use S3Guard
+
+
+### File not visible/saved
+
+The files in an object store are not visible until the write has been completed.
+In-progress writes are simply saved to a local file or cached in RAM, and only uploaded
+at the end of a write operation. If a process terminated unexpectedly, or failed
+to call the `close()` method on an output stream, the pending data will have
+been lost.
+
+### File `flush()`, `hsync()` and `hflush()` calls do not save data to S3
+
+Again, this is due to the fact that the data is cached locally until the
+`close()` operation. The S3A filesystem cannot be used as a store of data
+if it is required that the data is persisted durably after every
+`Syncable.hflush()` or `Syncable.hsync()` call.
+This includes resilient logging, HBase-style journalling
+and the like. The standard strategy here is to save to HDFS and then copy to S3.
+
+## <a name="encryption"></a> S3 Server Side Encryption
+
+### Using SSE-KMS "Invalid arn"
+
+When performing file operations, the user may run into an issue where the KMS
+key ARN is invalid.
+```
+com.amazonaws.services.s3.model.AmazonS3Exception:
+Invalid arn (Service: Amazon S3; Status Code: 400; Error Code: KMS.NotFoundException; Request ID: 708284CF60EE233F),
+S3 Extended Request ID: iHUUtXUSiNz4kv3Bdk/hf9F+wjPt8GIVvBHx/HEfCBYkn7W6zmpvbA3XT7Y5nTzcZtfuhcqDunw=:
+Invalid arn (Service: Amazon S3; Status Code: 400; Error Code: KMS.NotFoundException; Request ID: 708284CF60EE233F)
+```
+
+This is due to either the KMS key ID being entered incorrectly, or the KMS key
+being in a different region from the S3 bucket being used.
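+
+As a sketch of a working configuration (the key ARN is a placeholder, and must
+identify a key in the same region as the bucket):
+
+```xml
+<property>
+  <name>fs.s3a.server-side-encryption-algorithm</name>
+  <value>SSE-KMS</value>
+</property>
+
+<property>
+  <name>fs.s3a.server-side-encryption.key</name>
+  <value>arn:aws:kms:us-west-2:000000000000:key/example-key-id</value>
+</property>
+```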
+
+### Using SSE-C "Bad Request"
+
+When performing file operations, the user may run into an unexpected 400/403
+error such as
+```
+org.apache.hadoop.fs.s3a.AWSS3IOException: getFileStatus on fork-4/: com.amazonaws.services.s3.model.AmazonS3Exception:
+Bad Request (Service: Amazon S3; Status Code: 400;
+Error Code: 400 Bad Request; Request ID: 42F9A1987CB49A99),
+S3 Extended Request ID: jU2kcwaXnWj5APB14Cgb1IKkc449gu2+dhIsW/+7x9J4D+VUkKvu78mBo03oh9jnOT2eoTLdECU=:
+Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 42F9A1987CB49A99)
+```
+
+This can happen when the correct SSE-C encryption key is not specified.
+Such cases include:
+
+1. An object is encrypted using SSE-C on S3 and either the wrong encryption type
+is used, no encryption is specified, or the SSE-C key specified is incorrect.
+2. A directory is encrypted with SSE-C key A and the user is trying to move a
+file into that structure using a configured SSE-C key B.
+
+## <a name="performance"></a> Performance
+
+S3 is slower to read data than HDFS, even on virtual clusters running on
+Amazon EC2.
+
+* HDFS replicates data for faster query performance.
+* HDFS stores the data on the local hard disks, avoiding network traffic
+ if the code can be executed on that host. As EC2 hosts often have their
+ network bandwidth throttled, this can make a tangible difference.
+* HDFS is significantly faster for many "metadata" operations: listing
+the contents of a directory, calling `getFileStatus()` on a path,
+creating or deleting directories. (S3Guard reduces but does not eliminate
+the speed gap).
+* On HDFS, directory renames and deletes are `O(1)` operations. On
+S3, renaming is a very expensive `O(data)` operation which may fail partway through,
+in which case the final state depends on where the copy + delete sequence was when it failed.
+All the objects are copied, then the original set of objects is deleted, so
+a failure should not lose data —it may result in duplicate datasets.
+* Unless fast upload is enabled, the write only begins on a `close()` operation.
+This can take so long that some applications can actually time out.
+* File IO involving many seek calls/positioned read calls will encounter
+performance problems due to the size of the HTTP requests made. Enable the
+"random" fadvise policy (see the sketch after this list) to alleviate this at the
+expense of sequential read performance and bandwidth.
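+
+A sketch of enabling the random IO policy (assuming the Hadoop 2.8+ property
+name `fs.s3a.experimental.input.fadvise`):
+
+```xml
+<property>
+  <name>fs.s3a.experimental.input.fadvise</name>
+  <value>random</value>
+</property>
+```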
+
+The slow performance of `rename()` surfaces during the commit phase of work,
+including
+
+* The MapReduce `FileOutputCommitter`. This is also used by Apache Spark.
+* DistCp's rename-after-copy operation.
+* The `hadoop fs -rm` command renaming the file under `.Trash` rather than
+deleting it. Use `-skipTrash` to eliminate that step.
+
+These operations can be significantly slower when S3 is the destination
+compared to HDFS or other "real" filesystems.
+
+*Improving S3 load-balancing behavior*
+
+Amazon S3 uses a set of front-end servers to provide access to the underlying data.
+The choice of which front-end server to use is handled via load-balancing DNS
+service: when the IP address of an S3 bucket is looked up, the choice of which
+IP address to return to the client is made based on the current load
+of the front-end servers.
+
+Over time, the load across the front-end changes, so those servers considered
+"lightly loaded" will change. If the DNS value is cached for any length of time,
+your application may end up talking to an overloaded server. Or, in the case
+of failures, trying to talk to a server that is no longer there.
+
+And by default, for historical security reasons in the era of applets,
+the DNS TTL of a JVM is "infinity".
+
+To work with AWS better, set the DNS time-to-live of an application which
+works with S3 to something lower. See [AWS documentation](http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-jvm-ttl.html).
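+
+A sketch of one way to do this, by editing the JVM-wide security properties
+file (the TTL value of 60 seconds is only illustrative):
+
+```properties
+# $JAVA_HOME/jre/lib/security/java.security
+networkaddress.cache.ttl=60
+```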
+
+## <a name="network_performance"></a>Troubleshooting network performance
 
 An example of this is covered in [HADOOP-13871](https://issues.apache.org/jira/browse/HADOOP-13871).
 
@@ -49,4 +650,74 @@ Consider reducing the connection timeout of the s3a connection.
   <value>15000</value>
 </property>
 ```
-This *may* cause the client to react faster to network pauses.
+This *may* cause the client to react faster to network pauses, and so surface
+stack traces sooner. At the same time, it may be less resilient to
+connectivity problems.
+
+
+## Other Issues
+
+### <a name="logging"></a> Enabling low-level logging
+
+The AWS SDK and the Apache HTTP components can be configured to log in
+more detail, as can S3A itself.
+
+```properties
+log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
+log4j.logger.com.amazonaws.request=DEBUG
+log4j.logger.com.amazonaws.thirdparty.apache.http=DEBUG
+```
+
+If using the "unshaded" JAR, then the Apache HttpClient can be directly configured:
+
+```properties
+log4j.logger.org.apache.http=DEBUG
+```
+
+
+This produces a log such as this, which is for a V4-authenticated PUT of a 0-byte file used
+as an empty directory marker:
+
+```
+execchain.MainClientExec (MainClientExec.java:execute(255)) - Executing request PUT /test/ HTTP/1.1
+execchain.MainClientExec (MainClientExec.java:execute(266)) - Proxy auth state: UNCHALLENGED
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(135)) - http-outgoing-0 >> PUT /test/ HTTP/1.1
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> Host: ireland-new.s3-eu-west-1.amazonaws.com
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> x-amz-content-sha256: UNSIGNED-PAYLOAD
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=AKIAIYZ5JEEEER/20170904/eu-west-1/s3/aws4_request,  ...
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> X-Amz-Date: 20170904T172929Z
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> User-Agent: Hadoop 3.0.0-beta-1, aws-sdk-java/1.11.134 ...
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> amz-sdk-invocation-id: 75b530f8-ad31-1ad3-13db-9bd53666b30d
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> amz-sdk-retry: 0/0/500
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> Content-Type: application/octet-stream
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> Content-Length: 0
+http.headers (LoggingManagedHttpClientConnection.java:onRequestSubmitted(138)) - http-outgoing-0 >> Connection: Keep-Alive
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "PUT /test/ HTTP/1.1[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "Host: ireland-new.s3-eu-west-1.amazonaws.com[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "x-amz-content-sha256: UNSIGNED-PAYLOAD[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=AKIAIYZ5JEEEER/20170904/eu-west-1/s3/aws4_request, ,,,
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "X-Amz-Date: 20170904T172929Z[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "User-Agent: 3.0.0-beta-1, aws-sdk-java/1.11.134  ...
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "amz-sdk-invocation-id: 75b530f8-ad31-1ad3-13db-9bd53666b30d[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "Content-Length: 0[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 >> "[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "x-amz-id-2: mad9GqKztzlL0cdnCKAj9GJOAs+DUjbSC5jRkO7W1E7Nk2BUmFvt81bhSNPGdZmyyKqQI9i/B/A=[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "x-amz-request-id: C953D2FE4ABF5C51[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "Date: Mon, 04 Sep 2017 17:29:30 GMT[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "ETag: "d41d8cd98f00b204e9800998ecf8427e"[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "Content-Length: 0[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "Server: AmazonS3[\r][\n]"
+http.wire (Wire.java:wire(72)) - http-outgoing-0 << "[\r][\n]"
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(124)) - http-outgoing-0 << HTTP/1.1 200 OK
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << x-amz-id-2: mad9GqKztzlL0cdnCKAj9GJOAs+DUjbSC5jRkO7W1E7Nk2BUmFvt81bhSNPGdZmyyKqQI9i/B/A=
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << x-amz-request-id: C953D2FE4ABF5C51
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << Date: Mon, 04 Sep 2017 17:29:30 GMT
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << ETag: "d41d8cd98f00b204e9800998ecf8427e"
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << Content-Length: 0
+http.headers (LoggingManagedHttpClientConnection.java:onResponseReceived(127)) - http-outgoing-0 << Server: AmazonS3
+execchain.MainClientExec (MainClientExec.java:execute(284)) - Connection can be kept alive for 60000 MILLISECONDS
+```

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
index 587dbbc..8da8b6a 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
@@ -47,7 +47,6 @@ public class ITestS3AContractDistCp extends AbstractContractDistCpTest {
   protected Configuration createConfiguration() {
     Configuration newConf = super.createConfiguration();
     newConf.setLong(MULTIPART_SIZE, MULTIPART_SETTING);
-    newConf.setBoolean(FAST_UPLOAD, true);
     newConf.set(FAST_UPLOAD_BUFFER, FAST_UPLOAD_BUFFER_DISK);
     // patch in S3Guard options
     maybeEnableS3Guard(newConf);
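
With the `fs.s3a.fast.upload` switch gone, the block output stream is always active and only the buffering mechanism remains tunable, as this hunk and the matching ones below show. A configuration sketch, assuming the buffer option keeps its documented values of `disk`, `array` and `bytebuffer`:

```xml
<property>
  <!-- Buffer outgoing blocks on local disk; "array" and "bytebuffer"
       select on-heap and off-heap memory buffering respectively. -->
  <name>fs.s3a.fast.upload.buffer</name>
  <value>disk</value>
</property>
```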

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractCreate.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractCreate.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractCreate.java
deleted file mode 100644
index 502cf5a..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractCreate.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractCreateTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-import org.apache.hadoop.fs.contract.ContractTestUtils;
-
-/**
- * S3N contract tests creating files.
- */
-public class ITestS3NContractCreate extends AbstractContractCreateTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-
-  @Override
-  public void testOverwriteEmptyDirectory() throws Throwable {
-    ContractTestUtils.skip(
-        "blobstores can't distinguish empty directories from files");
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractDelete.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractDelete.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractDelete.java
deleted file mode 100644
index 675f979..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractDelete.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractDeleteTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * S3N contract tests covering deletes.
- */
-public class ITestS3NContractDelete extends AbstractContractDeleteTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractMkdir.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractMkdir.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractMkdir.java
deleted file mode 100644
index 3c566f3..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractMkdir.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractMkdirTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * Test dir operations on S3.
- */
-public class ITestS3NContractMkdir extends AbstractContractMkdirTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractOpen.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractOpen.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractOpen.java
deleted file mode 100644
index 7ebfc4e..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractOpen.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractOpenTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * S3N contract tests opening files.
- */
-public class ITestS3NContractOpen extends AbstractContractOpenTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRename.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRename.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRename.java
deleted file mode 100644
index effe9eb..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRename.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * S3N contract tests covering rename.
- */
-public class ITestS3NContractRename extends AbstractContractRenameTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRootDir.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRootDir.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRootDir.java
deleted file mode 100644
index 3fdf868..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRootDir.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * Root dir operations against an S3 bucket.
- */
-public class ITestS3NContractRootDir extends
-    AbstractContractRootDirectoryTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractSeek.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractSeek.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractSeek.java
deleted file mode 100644
index 9e1ce73..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractSeek.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
-import org.apache.hadoop.fs.contract.AbstractFSContract;
-
-/**
- * S3N contract tests covering file seek.
- */
-public class ITestS3NContractSeek extends AbstractContractSeekTest {
-
-  @Override
-  protected AbstractFSContract createContract(Configuration conf) {
-    return new NativeS3Contract(conf);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/NativeS3Contract.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/NativeS3Contract.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/NativeS3Contract.java
deleted file mode 100644
index 5796d88..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/NativeS3Contract.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.contract.s3n;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.contract.AbstractBondedFSContract;
-
-/**
- * The contract of S3N: only enabled if the test bucket is provided.
- */
-public class NativeS3Contract extends AbstractBondedFSContract {
-
-  public static final String CONTRACT_XML = "contract/s3n.xml";
-
-
-  public NativeS3Contract(Configuration conf) {
-    super(conf);
-    //insert the base features
-    addConfResource(CONTRACT_XML);
-  }
-
-  @Override
-  public String getScheme() {
-    return "s3n";
-  }
-
-  @Override
-  public Path getTestPath() {
-    String testUniqueForkId = System.getProperty("test.unique.fork.id");
-    return testUniqueForkId == null ? super.getTestPath() :
-        new Path("/" + testUniqueForkId, "test");
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java
index 87f676c..afd3ec2 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java
@@ -56,7 +56,6 @@ public class ITestS3ABlockOutputArray extends AbstractS3ATestBase {
     S3ATestUtils.disableFilesystemCaching(conf);
     conf.setLong(MIN_MULTIPART_THRESHOLD, MULTIPART_MIN_SIZE);
     conf.setInt(MULTIPART_SIZE, MULTIPART_MIN_SIZE);
-    conf.setBoolean(Constants.FAST_UPLOAD, true);
     conf.set(FAST_UPLOAD_BUFFER, getBlockOutputBufferName());
     return conf;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
index afa0441..8991bad 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
@@ -31,7 +31,6 @@ public class ITestS3AEncryptionSSECBlockOutputStream
   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
-    conf.setBoolean(Constants.FAST_UPLOAD, true);
     conf.set(Constants.FAST_UPLOAD_BUFFER,
         Constants.FAST_UPLOAD_BYTEBUFFER);
     conf.set(Constants.SERVER_SIDE_ENCRYPTION_KEY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java
index 8ce3a13..4c953bd 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java
@@ -39,9 +39,7 @@ public class ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream
       skip(Constants.SERVER_SIDE_ENCRYPTION_KEY+ " is not set for " +
           S3AEncryptionMethods.SSE_KMS.getMethod());
     }
-    conf.setBoolean(Constants.FAST_UPLOAD, true);
-    conf.set(Constants.FAST_UPLOAD_BUFFER,
-        Constants.FAST_UPLOAD_BYTEBUFFER);
+    conf.set(Constants.FAST_UPLOAD_BUFFER, Constants.FAST_UPLOAD_BYTEBUFFER);
     return conf;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
index 407601f..ff9c07a 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
@@ -29,7 +29,6 @@ public class ITestS3AEncryptionSSES3BlockOutputStream
   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
-    conf.setBoolean(Constants.FAST_UPLOAD, true);
     conf.set(Constants.FAST_UPLOAD_BUFFER,
         Constants.FAST_UPLOAD_BYTEBUFFER);
     //must specify encryption key as empty because SSE-S3 does not allow it,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
index 8b163cb..230dbad 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
@@ -98,7 +98,6 @@ public abstract class AbstractSTestS3AHugeFiles extends S3AScaleTestBase {
     conf.setLong(MIN_MULTIPART_THRESHOLD, partitionSize);
     conf.setInt(MULTIPART_SIZE, partitionSize);
     conf.set(USER_AGENT_PREFIX, "STestS3AHugeFileCreate");
-    conf.setBoolean(FAST_UPLOAD, true);
     conf.set(FAST_UPLOAD_BUFFER, getBlockOutputBufferName());
     return conf;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java
deleted file mode 100644
index 551956b..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.s3a.scale;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.s3a.Constants;
-
-/**
- * Use classic output for writing things; tweaks the configuration to do
- * this after it has been set up in the superclass.
- * The generator test has been copied and re
- */
-public class ITestS3AHugeFilesClassicOutput extends AbstractSTestS3AHugeFiles {
-
-  @Override
-  protected Configuration createScaleConfiguration() {
-    final Configuration conf = super.createScaleConfiguration();
-    conf.setBoolean(Constants.FAST_UPLOAD, false);
-    return conf;
-  }
-
-  protected String getBlockOutputBufferName() {
-    return "classic";
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestInMemoryNativeS3FileSystemContract.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestInMemoryNativeS3FileSystemContract.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestInMemoryNativeS3FileSystemContract.java
deleted file mode 100644
index adbf950..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestInMemoryNativeS3FileSystemContract.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.s3native;
-
-import java.io.IOException;
-
-/**
- * S3N basic contract tests through mock in-memory S3 implementation.
- */
-public class ITestInMemoryNativeS3FileSystemContract
-    extends NativeS3FileSystemContractBaseTest {
-
-  @Override
-  NativeFileSystemStore getNativeFileSystemStore() throws IOException {
-    return new InMemoryNativeFileSystemStore();
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeFileSystemStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeFileSystemStore.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeFileSystemStore.java
deleted file mode 100644
index cfe622c..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeFileSystemStore.java
+++ /dev/null
@@ -1,133 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.s3native;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-
-import static org.junit.Assert.*;
-import static org.junit.Assume.*;
-
-import org.junit.Before;
-import org.junit.After;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.BufferedInputStream;
-import java.io.BufferedOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.net.URI;
-import java.security.DigestInputStream;
-import java.security.DigestOutputStream;
-import java.security.MessageDigest;
-import java.security.NoSuchAlgorithmException;
-
-/**
- * S3N tests through live S3 service.
- */
-public class ITestJets3tNativeFileSystemStore {
-  private Configuration conf;
-  private Jets3tNativeFileSystemStore store;
-  private NativeS3FileSystem fs;
-
-  @Before
-  public void setUp() throws Exception {
-    conf = new Configuration();
-    store = new Jets3tNativeFileSystemStore();
-    fs = new NativeS3FileSystem(store);
-    conf.setBoolean("fs.s3n.multipart.uploads.enabled", true);
-    conf.setLong("fs.s3n.multipart.uploads.block.size", 64 * 1024 * 1024);
-    fs.initialize(URI.create(conf.get("test.fs.s3n.name")), conf);
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    try {
-      store.purge("test");
-    } catch (Exception e) {}
-  }
-
-  @BeforeClass
-  public static void checkSettings() throws Exception {
-    Configuration conf = new Configuration();
-    assumeNotNull(conf.get("fs.s3n.awsAccessKeyId"));
-    assumeNotNull(conf.get("fs.s3n.awsSecretAccessKey"));
-    assumeNotNull(conf.get("test.fs.s3n.name"));
-  }
-
-  protected void writeRenameReadCompare(Path path, long len)
-      throws IOException, NoSuchAlgorithmException {
-    // If len > fs.s3n.multipart.uploads.block.size,
-    // we'll use a multipart upload copy
-    MessageDigest digest = MessageDigest.getInstance("MD5");
-    OutputStream out = new BufferedOutputStream(
-        new DigestOutputStream(fs.create(path, false), digest));
-    for (long i = 0; i < len; i++) {
-      out.write('Q');
-    }
-    out.flush();
-    out.close();
-
-    assertTrue("Exists", fs.exists(path));
-
-    // Depending on if this file is over 5 GB or not,
-    // rename will cause a multipart upload copy
-    Path copyPath = path.suffix(".copy");
-    fs.rename(path, copyPath);
-
-    assertTrue("Copy exists", fs.exists(copyPath));
-
-    // Download file from S3 and compare the digest against the original
-    MessageDigest digest2 = MessageDigest.getInstance("MD5");
-    InputStream in = new BufferedInputStream(
-        new DigestInputStream(fs.open(copyPath), digest2));
-    long copyLen = 0;
-    while (in.read() != -1) {
-      copyLen++;
-    }
-    in.close();
-
-    assertEquals("Copy length matches original", len, copyLen);
-    assertArrayEquals("Digests match", digest.digest(), digest2.digest());
-  }
-
-  @Test
-  public void testSmallUpload() throws IOException, NoSuchAlgorithmException {
-    // Regular upload, regular copy
-    writeRenameReadCompare(new Path("/test/small"), 16384);
-  }
-
-  @Test
-  public void testMediumUpload() throws IOException, NoSuchAlgorithmException {
-    // Multipart upload, regular copy
-    writeRenameReadCompare(new Path("/test/medium"), 33554432);    // 32 MB
-  }
-
-  /*
-  Enable Multipart upload to run this test
-  @Test
-  public void testExtraLargeUpload()
-      throws IOException, NoSuchAlgorithmException {
-    // Multipart upload, multipart copy
-    writeRenameReadCompare(new Path("/test/xlarge"), 5368709121L); // 5GB+1byte
-  }
-  */
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49467165/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeS3FileSystemContract.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeS3FileSystemContract.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeS3FileSystemContract.java
deleted file mode 100644
index e51eaf6..0000000
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/ITestJets3tNativeS3FileSystemContract.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.s3native;
-
-import java.io.IOException;
-
-/**
- * S3N basic contract tests through live S3 service.
- */
-public class ITestJets3tNativeS3FileSystemContract
-    extends NativeS3FileSystemContractBaseTest {
-
-  @Override
-  NativeFileSystemStore getNativeFileSystemStore() throws IOException {
-    return new Jets3tNativeFileSystemStore();
-  }
-}
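
With the jets3t-backed S3N connector and all of its contract tests removed above, any remaining `s3n://` URIs have to move to the S3A connector; the visible change is the URL scheme, with credentials supplied through the `fs.s3a.*` settings instead of `fs.s3n.*`. A sketch with a hypothetical bucket name:

```
# example-bucket is a placeholder; credentials come from fs.s3a.* settings
hadoop fs -ls s3a://example-bucket/data/
```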

