Posted to common-commits@hadoop.apache.org by st...@apache.org on 2022/10/07 11:28:22 UTC

[hadoop] branch branch-3.3.5 updated: HADOOP-18442. Remove openstack support (#4855)

This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.5
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.5 by this push:
     new 8e8bc037aa6 HADOOP-18442. Remove openstack support (#4855)
8e8bc037aa6 is described below

commit 8e8bc037aa6c6953cf7f3923dee3eff59dbb963a
Author: Steve Loughran <st...@cloudera.com>
AuthorDate: Fri Oct 7 12:03:08 2022 +0100

    HADOOP-18442. Remove openstack support (#4855)
    
    The swift:// connector for openstack support has been removed.
    The hadoop-openstack jar remains, only now it is empty of code.
    This is to ensure that projects which declare the JAR a dependency
    will still have successful builds.
    
    Contributed by Steve Loughran
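    
    For reference, the dependency declaration that downstream projects keep
    working against looks like the following (an illustrative Maven fragment;
    the groupId/artifactId are taken from the build files this commit edits,
    and the version would normally be managed by a parent or BOM):
    
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-openstack</artifactId>
        </dependency>
    
    Such builds now resolve an artifact which contains no classes, so they
    compile and package as before but gain no swift:// filesystem support.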
---
 .../hadoop-cloud-storage/pom.xml                   |    5 -
 .../dev-support/findbugsExcludeFile.xml            |   15 -
 .../src/main/resources/core-default.xml            |   14 -
 .../src/site/markdown/FileSystemShell.md           |    2 +-
 .../src/site/markdown/filesystem/filesystem.md     |    4 +-
 .../src/site/markdown/filesystem/introduction.md   |   12 +-
 .../src/site/markdown/filesystem/testing.md        |   51 -
 .../hadoop/conf/TestCommonConfigurationFields.java |    4 +-
 hadoop-project/pom.xml                             |    1 +
 hadoop-project/src/site/site.xml                   |    1 -
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |    4 +-
 .../dev-support/findbugs-exclude.xml               |   34 -
 hadoop-tools/hadoop-openstack/pom.xml              |   93 +-
 .../fs/swift/auth/ApiKeyAuthenticationRequest.java |   66 -
 .../hadoop/fs/swift/auth/ApiKeyCredentials.java    |   87 -
 .../fs/swift/auth/AuthenticationRequest.java       |   57 -
 .../swift/auth/AuthenticationRequestWrapper.java   |   59 -
 .../fs/swift/auth/AuthenticationResponse.java      |   69 -
 .../fs/swift/auth/AuthenticationWrapper.java       |   47 -
 .../hadoop/fs/swift/auth/KeyStoneAuthRequest.java  |   59 -
 .../fs/swift/auth/KeystoneApiKeyCredentials.java   |   66 -
 .../swift/auth/PasswordAuthenticationRequest.java  |   62 -
 .../hadoop/fs/swift/auth/PasswordCredentials.java  |   86 -
 .../org/apache/hadoop/fs/swift/auth/Roles.java     |   97 -
 .../hadoop/fs/swift/auth/entities/AccessToken.java |  107 --
 .../hadoop/fs/swift/auth/entities/Catalog.java     |  107 --
 .../hadoop/fs/swift/auth/entities/Endpoint.java    |  194 --
 .../hadoop/fs/swift/auth/entities/Tenant.java      |  107 --
 .../apache/hadoop/fs/swift/auth/entities/User.java |  132 --
 .../SwiftAuthenticationFailedException.java        |   48 -
 .../swift/exceptions/SwiftBadRequestException.java |   49 -
 .../exceptions/SwiftConfigurationException.java    |   33 -
 .../exceptions/SwiftConnectionClosedException.java |   36 -
 .../swift/exceptions/SwiftConnectionException.java |   35 -
 .../hadoop/fs/swift/exceptions/SwiftException.java |   43 -
 .../exceptions/SwiftInternalStateException.java    |   38 -
 .../exceptions/SwiftInvalidResponseException.java  |  118 --
 .../exceptions/SwiftJsonMarshallingException.java  |   33 -
 .../exceptions/SwiftOperationFailedException.java  |   35 -
 .../exceptions/SwiftThrottledRequestException.java |   37 -
 .../SwiftUnsupportedFeatureException.java          |   30 -
 .../apache/hadoop/fs/swift/http/CopyRequest.java   |   41 -
 .../hadoop/fs/swift/http/ExceptionDiags.java       |   98 -
 .../hadoop/fs/swift/http/HttpBodyContent.java      |   45 -
 .../fs/swift/http/HttpInputStreamWithRelease.java  |  234 ---
 .../hadoop/fs/swift/http/RestClientBindings.java   |  225 ---
 .../fs/swift/http/SwiftProtocolConstants.java      |  270 ---
 .../hadoop/fs/swift/http/SwiftRestClient.java      | 1879 --------------------
 .../java/org/apache/hadoop/fs/swift/package.html   |   81 -
 .../swift/snative/StrictBufferedFSInputStream.java |   49 -
 .../hadoop/fs/swift/snative/SwiftFileStatus.java   |  102 --
 .../fs/swift/snative/SwiftNativeFileSystem.java    |  761 --------
 .../swift/snative/SwiftNativeFileSystemStore.java  |  986 ----------
 .../fs/swift/snative/SwiftNativeInputStream.java   |  385 ----
 .../fs/swift/snative/SwiftNativeOutputStream.java  |  389 ----
 .../fs/swift/snative/SwiftObjectFileStatus.java    |  115 --
 .../org/apache/hadoop/fs/swift/util/Duration.java  |   57 -
 .../apache/hadoop/fs/swift/util/DurationStats.java |  154 --
 .../hadoop/fs/swift/util/DurationStatsTable.java   |   77 -
 .../hadoop/fs/swift/util/HttpResponseUtils.java    |  121 --
 .../org/apache/hadoop/fs/swift/util/JSONUtil.java  |  124 --
 .../hadoop/fs/swift/util/SwiftObjectPath.java      |  187 --
 .../hadoop/fs/swift/util/SwiftTestUtils.java       |  547 ------
 .../apache/hadoop/fs/swift/util/SwiftUtils.java    |  216 ---
 .../hadoop-openstack/src/site/markdown/index.md    |  549 ------
 .../src/site/resources/css/site.css                |   30 -
 hadoop-tools/hadoop-openstack/src/site/site.xml    |   46 -
 .../apache/hadoop/fs/swift/AcceptAllFilter.java    |   31 -
 .../hadoop/fs/swift/SwiftFileSystemBaseTest.java   |  400 -----
 .../apache/hadoop/fs/swift/SwiftTestConstants.java |   34 -
 .../hadoop/fs/swift/TestFSMainOperationsSwift.java |  372 ----
 .../apache/hadoop/fs/swift/TestLogResources.java   |   63 -
 .../apache/hadoop/fs/swift/TestReadPastBuffer.java |  163 --
 .../java/org/apache/hadoop/fs/swift/TestSeek.java  |  260 ---
 .../apache/hadoop/fs/swift/TestSwiftConfig.java    |  194 --
 .../fs/swift/TestSwiftFileSystemBasicOps.java      |  296 ---
 .../fs/swift/TestSwiftFileSystemBlockLocation.java |  167 --
 .../fs/swift/TestSwiftFileSystemBlocksize.java     |   60 -
 .../fs/swift/TestSwiftFileSystemConcurrency.java   |  105 --
 .../fs/swift/TestSwiftFileSystemContract.java      |  138 --
 .../hadoop/fs/swift/TestSwiftFileSystemDelete.java |   90 -
 .../fs/swift/TestSwiftFileSystemDirectories.java   |  141 --
 .../swift/TestSwiftFileSystemExtendedContract.java |  143 --
 .../fs/swift/TestSwiftFileSystemLsOperations.java  |  169 --
 .../TestSwiftFileSystemPartitionedUploads.java     |  442 -----
 .../hadoop/fs/swift/TestSwiftFileSystemRead.java   |   94 -
 .../hadoop/fs/swift/TestSwiftFileSystemRename.java |  275 ---
 .../hadoop/fs/swift/TestSwiftObjectPath.java       |  171 --
 .../hadoop/fs/swift/contract/SwiftContract.java    |   44 -
 .../fs/swift/contract/TestSwiftContractCreate.java |   37 -
 .../fs/swift/contract/TestSwiftContractDelete.java |   31 -
 .../fs/swift/contract/TestSwiftContractMkdir.java  |   34 -
 .../fs/swift/contract/TestSwiftContractOpen.java   |   42 -
 .../fs/swift/contract/TestSwiftContractRename.java |   32 -
 .../swift/contract/TestSwiftContractRootDir.java   |   35 -
 .../fs/swift/contract/TestSwiftContractSeek.java   |   31 -
 .../hdfs2/TestSwiftFileSystemDirectoriesHdfs2.java |   43 -
 .../hadoop/fs/swift/hdfs2/TestV2LsOperations.java  |  129 --
 .../fs/swift/http/TestRestClientBindings.java      |  198 ---
 .../hadoop/fs/swift/http/TestSwiftRestClient.java  |  117 --
 .../hadoop/fs/swift/scale/SwiftScaleTestBase.java  |   37 -
 .../fs/swift/scale/TestWriteManySmallFiles.java    |   97 -
 .../src/test/resources/contract/swift.xml          |  105 --
 .../src/test/resources/core-site.xml               |   51 -
 .../src/test/resources/log4j.properties            |   39 -
 hadoop-tools/hadoop-tools-dist/pom.xml             |    6 -
 106 files changed, 17 insertions(+), 14844 deletions(-)

diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 4eecf7ebca1..ba6b1fdf4f2 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -123,11 +123,6 @@
       <artifactId>hadoop-azure-datalake</artifactId>
       <scope>compile</scope>
     </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-openstack</artifactId>
-      <scope>compile</scope>
-    </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-cos</artifactId>
diff --git a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
index 23e39d055ff..b885891af73 100644
--- a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
@@ -379,21 +379,6 @@
        <Bug code="JLM" />
      </Match>
 
-  <!--
-  OpenStack Swift FS module -closes streams in a different method
-  from where they are opened.
-  -->
-    <Match>
-      <Class name="org.apache.hadoop.fs.swift.snative.SwiftNativeOutputStream"/>
-      <Method name="uploadFileAttempt"/>
-      <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
-    </Match>
-    <Match>
-      <Class name="org.apache.hadoop.fs.swift.snative.SwiftNativeOutputStream"/>
-      <Method name="uploadFilePartAttempt"/>
-      <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
-    </Match>
-
      <!-- code from maven source, null value is checked at callee side. -->
      <Match>
        <Class name="org.apache.hadoop.util.ComparableVersion$ListItem" />
diff --git a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 47b3e592999..d5cb5cceecc 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1072,14 +1072,6 @@
   </description>
 </property>
 
-<property>
-  <name>fs.viewfs.overload.scheme.target.swift.impl</name>
-  <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
-  <description>The SwiftNativeFileSystem for view file system overload scheme
-   when child file system and ViewFSOverloadScheme's schemes are swift.
-  </description>
-</property>
-
 <property>
   <name>fs.viewfs.overload.scheme.target.oss.impl</name>
   <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
@@ -1189,12 +1181,6 @@
   <description>File space usage statistics refresh interval in msec.</description>
 </property>
 
-<property>
-  <name>fs.swift.impl</name>
-  <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
-  <description>The implementation class of the OpenStack Swift Filesystem</description>
-</property>
-
 <property>
   <name>fs.automatic.close</name>
   <value>true</value>
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
index b1e4652f613..382a6df2104 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
@@ -852,7 +852,7 @@ Return the help for an individual command.
 ====================================================
 
 The Hadoop FileSystem shell works with Object Stores such as Amazon S3,
-Azure WASB and OpenStack Swift.
+Azure ABFS and Google GCS.
 
 
 
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
index 004220c4bed..9fd14f22189 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
@@ -701,7 +701,7 @@ The behavior of the returned stream is covered in [Output](outputstream.html).
  clients creating files with `overwrite==true` to fail if the file is created
  by another client between the two tests.
 
-* S3A, Swift and potentially other Object Stores do not currently change the `FS` state
+* The S3A and potentially other Object Store connectors do not currently change the `FS` state
 until the output stream `close()` operation is completed.
 This is a significant difference between the behavior of object stores
 and that of filesystems, as it allows &gt;1 client to create a file with `overwrite=false`,
@@ -1225,7 +1225,7 @@ the parent directories of the destination then exist:
 There is a check for and rejection if the `parent(dest)` is a file, but
 no checks for any other ancestors.
 
-*Other Filesystems (including Swift) *
+*Other Filesystems*
 
 Other filesystems strictly reject the operation, raising a `FileNotFoundException`
 
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
index 903d2bb90ff..76782b45409 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
@@ -30,8 +30,8 @@ are places where HDFS diverges from the expected behaviour of a POSIX
 filesystem.
 
 The bundled S3A FileSystem clients make Amazon's S3 Object Store ("blobstore")
-accessible through the FileSystem API. The Swift FileSystem driver provides similar
-functionality for the OpenStack Swift blobstore. The Azure WASB and ADL object
+accessible through the FileSystem API. 
+The Azure ABFS, WASB and ADL object
 storage FileSystems talks to Microsoft's Azure storage. All of these
 bind to object stores, which do have different behaviors, especially regarding
 consistency guarantees, and atomicity of operations.
@@ -314,10 +314,10 @@ child entries
 
 This specification refers to *Object Stores* in places, often using the
 term *Blobstore*. Hadoop does provide FileSystem client classes for some of these
-even though they violate many of the requirements. This is why, although
-Hadoop can read and write data in an object store, the two which Hadoop ships
-with direct support for &mdash; Amazon S3 and OpenStack Swift &mdash; cannot
-be used as direct replacements for HDFS.
+even though they violate many of the requirements.
+
+Consult the documentation for a specific store to determine its compatibility
+with specific applications and services.
 
 *What is an Object Store?*
 
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
index 4c6fa3ff0f6..53eb9870bc1 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
@@ -66,55 +66,6 @@ Example:
       </property>
     </configuration>
 
-
-### swift://
-
-The OpenStack Swift login details must be defined in the file
-`/hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml`.
-The standard hadoop-common `contract-test-options.xml` resource file cannot be
-used, as that file does not get included in `hadoop-common-test.jar`.
-
-
-In `/hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml`
-the Swift bucket name must be defined in the property `fs.contract.test.fs.swift`,
-along with the login details for the specific Swift service provider in which the
-bucket is posted.
-
-    <configuration>
-      <property>
-        <name>fs.contract.test.fs.swift</name>
-        <value>swift://swiftbucket.rackspace/</value>
-      </property>
-
-      <property>
-        <name>fs.swift.service.rackspace.auth.url</name>
-        <value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
-        <description>Rackspace US (multiregion)</description>
-      </property>
-
-      <property>
-        <name>fs.swift.service.rackspace.username</name>
-        <value>this-is-your-username</value>
-      </property>
-
-      <property>
-        <name>fs.swift.service.rackspace.region</name>
-        <value>DFW</value>
-      </property>
-
-      <property>
-        <name>fs.swift.service.rackspace.apikey</name>
-        <value>ab0bceyoursecretapikeyffef</value>
-      </property>
-
-    </configuration>
-
-1. Often the different public cloud Swift infrastructures exhibit different behaviors
-(authentication and throttling in particular). We recommand that testers create
-accounts on as many of these providers as possible and test against each of them.
-1. They can be slow, especially remotely. Remote links are also the most likely
-to make eventual-consistency behaviors visible, which is a mixed benefit.
-
 ## Testing a new filesystem
 
 The core of adding a new FileSystem to the contract tests is adding a
@@ -228,8 +179,6 @@ Passing all the FileSystem contract tests does not mean that a filesystem can be
 * Scalability: does it support files as large as HDFS, or as many in a single directory?
 * Durability: do files actually last -and how long for?
 
-Proof that this is is true is the fact that the Amazon S3 and OpenStack Swift object stores are eventually consistent object stores with non-atomic rename and delete operations. Single threaded test cases are unlikely to see some of the concurrency issues, while consistency is very often only visible in tests that span a datacenter.
-
 There are also some specific aspects of the use of the FileSystem API:
 
 * Compatibility with the `hadoop -fs` CLI.
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index f3cfe032a45..8ca414400c8 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -139,7 +139,6 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase {
     xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.s3a.impl");
     xmlPropsToSkipCompare.
         add("fs.viewfs.overload.scheme.target.swebhdfs.impl");
-    xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.swift.impl");
     xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.webhdfs.impl");
     xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.wasb.impl");
 
@@ -220,8 +219,7 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase {
     xmlPropsToSkipCompare.add("hadoop.common.configuration.version");
     // - org.apache.hadoop.fs.FileSystem
     xmlPropsToSkipCompare.add("fs.har.impl.disable.cache");
-    // - org.apache.hadoop.fs.FileSystem#getFileSystemClass()
-    xmlPropsToSkipCompare.add("fs.swift.impl");
+
     // - package org.apache.hadoop.tracing.TraceUtils ?
     xmlPropsToSkipCompare.add("hadoop.htrace.span.receiver.classes");
     // Private keys
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 0ed6fbede49..b2fd463d0e8 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -688,6 +688,7 @@
         <version>${hadoop.version}</version>
       </dependency>
 
+      <!-- This is empty; retained only for downstream app build compatibility. -->
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-openstack</artifactId>
diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index e2d149da2eb..b53cbd2a056 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -182,7 +182,6 @@
       <item name="Azure Blob Storage" href="hadoop-azure/index.html"/>
       <item name="Azure Data Lake Storage"
             href="hadoop-azure-datalake/index.html"/>
-      <item name="OpenStack Swift" href="hadoop-openstack/index.html"/>
       <item name="Tencent COS" href="hadoop-cos/cloud-storage/index.html"/>
     </menu>
 
diff --git a/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm b/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
index 0e7d67f24b3..560ec55d2b2 100644
--- a/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
+++ b/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
@@ -580,7 +580,7 @@ $H3 MapReduce and other side-effects
 
 $H3 DistCp and Object Stores
 
-DistCp works with Object Stores such as Amazon S3, Azure WASB and OpenStack Swift.
+DistCp works with Object Stores such as Amazon S3, Azure ABFS and Google GCS.
 
 Prequisites
 
@@ -623,7 +623,7 @@ And to use `-update` to only copy changed files.
 
 ```bash
 hadoop distcp -update -numListstatusThreads 20  \
-  swift://history.cluster1/2016 \
+  s3a://history/2016 \
   hdfs://nn1:8020/history/2016
 ```
 
diff --git a/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml b/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml
deleted file mode 100644
index cfb75c73081..00000000000
--- a/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed under the Apache License, Version 2.0 (the "License");
-  ~   you may not use this file except in compliance with the License.
-  ~   You may obtain a copy of the License at
-  ~   
-  ~    http://www.apache.org/licenses/LICENSE-2.0
-  ~   
-  ~   Unless required by applicable law or agreed to in writing, software
-  ~   distributed under the License is distributed on an "AS IS" BASIS,
-  ~   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~   See the License for the specific language governing permissions and
-  ~   limitations under the License. See accompanying LICENSE file.
-  -->
-<FindBugsFilter>
-
-  <!--
-  OpenStack Swift FS module -closes streams in a different method
-  from where they are opened.
-  -->
-  <Match>
-    <Class name="org.apache.hadoop.fs.swift.snative.SwiftNativeOutputStream"/>
-    <Method name="uploadFileAttempt"/>
-    <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
-    <Bug code="OBL"/>
-  </Match>
-  <Match>
-    <Class name="org.apache.hadoop.fs.swift.snative.SwiftNativeOutputStream"/>
-    <Method name="uploadFilePartAttempt"/>
-    <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
-    <Bug code="OBL"/>
-  </Match>
-
-</FindBugsFilter>
diff --git a/hadoop-tools/hadoop-openstack/pom.xml b/hadoop-tools/hadoop-openstack/pom.xml
index 8d667f9e54b..e94319f9ef2 100644
--- a/hadoop-tools/hadoop-openstack/pom.xml
+++ b/hadoop-tools/hadoop-openstack/pom.xml
@@ -26,9 +26,10 @@
   <version>3.3.5-SNAPSHOT</version>
   <name>Apache Hadoop OpenStack support</name>
   <description>
-    This module contains code to support integration with OpenStack.
-    Currently this consists of a filesystem client to read data from
-    and write data to an OpenStack Swift object store.
+    This module used to contain code to support integration with OpenStack.
+    It has been deleted as unsupported; the JAR is still published so as to
+    not break applications which declare an explicit maven/ivy/SBT dependency
+    on the module.
   </description>
   <packaging>jar</packaging>
 
@@ -37,32 +38,6 @@
     <downloadSources>true</downloadSources>
   </properties>
 
-  <profiles>
-    <profile>
-      <id>tests-off</id>
-      <activation>
-        <file>
-          <missing>src/test/resources/auth-keys.xml</missing>
-        </file>
-      </activation>
-      <properties>
-        <maven.test.skip>true</maven.test.skip>
-      </properties>
-    </profile>
-    <profile>
-      <id>tests-on</id>
-      <activation>
-        <file>
-          <exists>src/test/resources/auth-keys.xml</exists>
-        </file>
-      </activation>
-      <properties>
-        <maven.test.skip>false</maven.test.skip>
-      </properties>
-    </profile>
-
-  </profiles>
-
   <build>
     <plugins>
       <plugin>
@@ -70,71 +45,11 @@
         <artifactId>spotbugs-maven-plugin</artifactId>
         <configuration>
           <xmlOutput>true</xmlOutput>
-          <excludeFilterFile>${basedir}/dev-support/findbugs-exclude.xml
-          </excludeFilterFile>
           <effort>Max</effort>
         </configuration>
       </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-dependency-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>deplist</id>
-            <phase>compile</phase>
-            <goals>
-              <goal>list</goal>
-            </goals>
-            <configuration>
-              <!-- build a shellprofile -->
-              <outputFile>${project.basedir}/target/hadoop-tools-deps/${project.artifactId}.tools-optional.txt</outputFile>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
     </plugins>
   </build>
 
-  <dependencies>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-common</artifactId>
-      <scope>compile</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-common</artifactId>
-      <scope>test</scope>
-      <type>test-jar</type>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-annotations</artifactId>
-      <scope>compile</scope>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.httpcomponents</groupId>
-      <artifactId>httpcore</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>commons-logging</groupId>
-      <artifactId>commons-logging</artifactId>
-      <scope>compile</scope>
-    </dependency>
 
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>provided</scope>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-annotations</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-databind</artifactId>
-    </dependency>
-  </dependencies>
 </project>
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java
deleted file mode 100644
index e25d17d2fb8..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-import com.fasterxml.jackson.annotation.JsonProperty;
-
-/**
- * Class that represents authentication request to Openstack Keystone.
- * Contains basic authentication information.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS
- */
-public class ApiKeyAuthenticationRequest extends AuthenticationRequest {
-  /**
-   * Credentials for login
-   */
-  private ApiKeyCredentials apiKeyCredentials;
-
-  /**
-   * API key auth
-   * @param tenantName tenant
-   * @param apiKeyCredentials credentials
-   */
-  public ApiKeyAuthenticationRequest(String tenantName, ApiKeyCredentials apiKeyCredentials) {
-    this.tenantName = tenantName;
-    this.apiKeyCredentials = apiKeyCredentials;
-  }
-
-  /**
-   * @return credentials for login into Keystone
-   */
-  @JsonProperty("RAX-KSKEY:apiKeyCredentials")
-  public ApiKeyCredentials getApiKeyCredentials() {
-    return apiKeyCredentials;
-  }
-
-  /**
-   * @param apiKeyCredentials credentials for login into Keystone
-   */
-  public void setApiKeyCredentials(ApiKeyCredentials apiKeyCredentials) {
-    this.apiKeyCredentials = apiKeyCredentials;
-  }
-
-  @Override
-  public String toString() {
-    return "Auth as " +
-           "tenant '" + tenantName + "' "
-           + apiKeyCredentials;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java
deleted file mode 100644
index 412ce81daa3..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-
-/**
- * Describes credentials to log in Swift using Keystone authentication.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class ApiKeyCredentials {
-  /**
-   * user login
-   */
-  private String username;
-
-  /**
-   * user password
-   */
-  private String apikey;
-
-  /**
-   * default constructor
-   */
-  public ApiKeyCredentials() {
-  }
-
-  /**
-   * @param username user login
-   * @param apikey user api key
-   */
-  public ApiKeyCredentials(String username, String apikey) {
-    this.username = username;
-    this.apikey = apikey;
-  }
-
-  /**
-   * @return user api key
-   */
-  public String getApiKey() {
-    return apikey;
-  }
-
-  /**
-   * @param apikey user api key
-   */
-  public void setApiKey(String apikey) {
-    this.apikey = apikey;
-  }
-
-  /**
-   * @return login
-   */
-  public String getUsername() {
-    return username;
-  }
-
-  /**
-   * @param username login
-   */
-  public void setUsername(String username) {
-    this.username = username;
-  }
-
-  @Override
-  public String toString() {
-    return "user " +
-           "'" + username + '\'' +
-           " with key of length " + ((apikey == null) ? 0 : apikey.length());
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java
deleted file mode 100644
index a2a3b55e76f..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * Class that represents an authentication request to Openstack Keystone.
- * Contains basic authentication information.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class AuthenticationRequest {
-
-  /**
-   * tenant name
-   */
-  protected String tenantName;
-
-  public AuthenticationRequest() {
-  }
-
-  /**
-   * @return tenant name for Keystone authorization
-   */
-  public String getTenantName() {
-    return tenantName;
-  }
-
-  /**
-   * @param tenantName tenant name for authorization
-   */
-  public void setTenantName(String tenantName) {
-    this.tenantName = tenantName;
-  }
-
-  @Override
-  public String toString() {
-    return "AuthenticationRequest{" +
-           "tenantName='" + tenantName + '\'' +
-           '}';
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java
deleted file mode 100644
index f30e90dad38..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * This class maps the hierarchy of the Keystone
- * authentication model onto Java code.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class AuthenticationRequestWrapper {
-  /**
-   * authentication request
-   */
-  private AuthenticationRequest auth;
-
-  /**
-   * default constructor used for json parsing
-   */
-  public AuthenticationRequestWrapper() {
-  }
-
-  /**
-   * @param auth authentication requests
-   */
-  public AuthenticationRequestWrapper(AuthenticationRequest auth) {
-    this.auth = auth;
-  }
-
-  /**
-   * @return authentication request
-   */
-  public AuthenticationRequest getAuth() {
-    return auth;
-  }
-
-  /**
-   * @param auth authentication request
-   */
-  public void setAuth(AuthenticationRequest auth) {
-    this.auth = auth;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java
deleted file mode 100644
index f09ec0c5fb9..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-import org.apache.hadoop.fs.swift.auth.entities.AccessToken;
-import org.apache.hadoop.fs.swift.auth.entities.Catalog;
-import org.apache.hadoop.fs.swift.auth.entities.User;
-
-import java.util.List;
-
-/**
- * Response from KeyStone deserialized into AuthenticationResponse class.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class AuthenticationResponse {
-  private Object metadata;
-  private List<Catalog> serviceCatalog;
-  private User user;
-  private AccessToken token;
-
-  public Object getMetadata() {
-    return metadata;
-  }
-
-  public void setMetadata(Object metadata) {
-    this.metadata = metadata;
-  }
-
-  public List<Catalog> getServiceCatalog() {
-    return serviceCatalog;
-  }
-
-  public void setServiceCatalog(List<Catalog> serviceCatalog) {
-    this.serviceCatalog = serviceCatalog;
-  }
-
-  public User getUser() {
-    return user;
-  }
-
-  public void setUser(User user) {
-    this.user = user;
-  }
-
-  public AccessToken getToken() {
-    return token;
-  }
-
-  public void setToken(AccessToken token) {
-    this.token = token;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java
deleted file mode 100644
index 6f67a16715e..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * This class maps the hierarchy of the Keystone
- * authentication model onto Java code.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class AuthenticationWrapper {
-
-  /**
-   * authentication response field
-   */
-  private AuthenticationResponse access;
-
-  /**
-   * @return authentication response
-   */
-  public AuthenticationResponse getAccess() {
-    return access;
-  }
-
-  /**
-   * @param access sets authentication response
-   */
-  public void setAccess(AuthenticationResponse access) {
-    this.access = access;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java
deleted file mode 100644
index c3abbac88f4..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * Class that represents authentication to OpenStack Keystone.
- * Contains basic authentication information.
- * Used when {@link ApiKeyAuthenticationRequest} is not applicable,
- * e.g. with different Keystone installations, versions or modifications.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class KeyStoneAuthRequest extends AuthenticationRequest {
-
-  /**
-   * Credentials for Keystone authentication
-   */
-  private KeystoneApiKeyCredentials apiAccessKeyCredentials;
-
-  /**
-   * @param tenant                  Keystone tenant name for authentication
-   * @param apiAccessKeyCredentials Credentials for authentication
-   */
-  public KeyStoneAuthRequest(String tenant, KeystoneApiKeyCredentials apiAccessKeyCredentials) {
-    this.apiAccessKeyCredentials = apiAccessKeyCredentials;
-    this.tenantName = tenant;
-  }
-
-  public KeystoneApiKeyCredentials getApiAccessKeyCredentials() {
-    return apiAccessKeyCredentials;
-  }
-
-  public void setApiAccessKeyCredentials(KeystoneApiKeyCredentials apiAccessKeyCredentials) {
-    this.apiAccessKeyCredentials = apiAccessKeyCredentials;
-  }
-
-  @Override
-  public String toString() {
-    return "KeyStoneAuthRequest as " +
-            "tenant '" + tenantName + "' "
-            + apiAccessKeyCredentials;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java
deleted file mode 100644
index 75202b3a6d2..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * Class for Keystone authentication.
- * Used when {@link ApiKeyCredentials} is not applicable.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class KeystoneApiKeyCredentials {
-
-  /**
-   * User access key
-   */
-  private String accessKey;
-
-  /**
-   * User access secret
-   */
-  private String secretKey;
-
-  public KeystoneApiKeyCredentials(String accessKey, String secretKey) {
-    this.accessKey = accessKey;
-    this.secretKey = secretKey;
-  }
-
-  public String getAccessKey() {
-    return accessKey;
-  }
-
-  public void setAccessKey(String accessKey) {
-    this.accessKey = accessKey;
-  }
-
-  public String getSecretKey() {
-    return secretKey;
-  }
-
-  public void setSecretKey(String secretKey) {
-    this.secretKey = secretKey;
-  }
-
-  @Override
-  public String toString() {
-    return "user " +
-            "'" + accessKey + '\'' +
-            " with key of length " + ((secretKey == null) ? 0 : secretKey.length());
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java
deleted file mode 100644
index ee519f3f8da..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * Class that represents an authentication request to Openstack Keystone.
- * Contains basic authentication information.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class PasswordAuthenticationRequest extends AuthenticationRequest {
-  /**
-   * Credentials for login
-   */
-  private PasswordCredentials passwordCredentials;
-
-  /**
-   * @param tenantName tenant
-   * @param passwordCredentials password credentials
-   */
-  public PasswordAuthenticationRequest(String tenantName, PasswordCredentials passwordCredentials) {
-    this.tenantName = tenantName;
-    this.passwordCredentials = passwordCredentials;
-  }
-
-  /**
-   * @return credentials for login into Keystone
-   */
-  public PasswordCredentials getPasswordCredentials() {
-    return passwordCredentials;
-  }
-
-  /**
-   * @param passwordCredentials credentials for login into Keystone
-   */
-  public void setPasswordCredentials(PasswordCredentials passwordCredentials) {
-    this.passwordCredentials = passwordCredentials;
-  }
-
-  @Override
-  public String toString() {
-    return "Authenticate as " +
-           "tenant '" + tenantName + "' "
-           + passwordCredentials;
-  }
-}
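The two classes above, once wrapped in `AuthenticationRequestWrapper`, were serialized by Jackson into the Keystone v2 password-auth request body. As a rough illustration of that wire format, here is a sketch that builds the same `{"auth": {...}}` envelope with only the JDK (the class name and field values below are hypothetical; Jackson produced this structure from the POJOs automatically):

```java
// Sketch of the JSON body that PasswordAuthenticationRequest, wrapped in
// AuthenticationRequestWrapper, produced for Keystone v2 password auth.
public class KeystonePayloadSketch {

    /** Build the {"auth": {...}} envelope by hand, mirroring the POJO hierarchy. */
    public static String passwordAuthBody(String tenant, String user, String password) {
        return "{\"auth\": {"
                + "\"tenantName\": \"" + tenant + "\", "
                + "\"passwordCredentials\": {"
                + "\"username\": \"" + user + "\", "
                + "\"password\": \"" + password + "\"}}}";
    }

    public static void main(String[] args) {
        System.out.println(passwordAuthBody("demo", "alice", "s3cr3t"));
    }
}
```

The wrapper class existed purely so that the serialized object gained the outer `"auth"` key Keystone expects; the tenant name and credentials lived one level down, exactly as in the string above.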
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java
deleted file mode 100644
index 40d8c77feb4..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java
+++ /dev/null
@@ -1,86 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-
-/**
- * Describes credentials to log in Swift using Keystone authentication.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class PasswordCredentials {
-  /**
-   * user login
-   */
-  private String username;
-
-  /**
-   * user password
-   */
-  private String password;
-
-  /**
-   * default constructor
-   */
-  public PasswordCredentials() {
-  }
-
-  /**
-   * @param username user login
-   * @param password user password
-   */
-  public PasswordCredentials(String username, String password) {
-    this.username = username;
-    this.password = password;
-  }
-
-  /**
-   * @return user password
-   */
-  public String getPassword() {
-    return password;
-  }
-
-  /**
-   * @param password user password
-   */
-  public void setPassword(String password) {
-    this.password = password;
-  }
-
-  /**
-   * @return login
-   */
-  public String getUsername() {
-    return username;
-  }
-
-  /**
-   * @param username login
-   */
-  public void setUsername(String username) {
-    this.username = username;
-  }
-
-  @Override
-  public String toString() {
-    return "PasswordCredentials{username='" + username + "'}";
-  }
-}
-
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java
deleted file mode 100644
index 57f2fa6d451..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java
+++ /dev/null
@@ -1,97 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth;
-
-/**
- * Describes user roles in Openstack system.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-public class Roles {
-  /**
-   * role name
-   */
-  private String name;
-
-  /**
-   * This field is used in the RackSpace auth model
-   */
-  private String id;
-
-  /**
-   * This field is used in the RackSpace auth model
-   */
-  private String description;
-
-  /**
-   * Service id used in HP public Cloud
-   */
-  private String serviceId;
-
-  /**
-   * Tenant id used in HP public Cloud
-   */
-  private String tenantId;
-
-  /**
-   * @return role name
-   */
-  public String getName() {
-    return name;
-  }
-
-  /**
-   * @param name role name
-   */
-  public void setName(String name) {
-    this.name = name;
-  }
-
-  public String getId() {
-    return id;
-  }
-
-  public void setId(String id) {
-    this.id = id;
-  }
-
-  public String getDescription() {
-    return description;
-  }
-
-  public void setDescription(String description) {
-    this.description = description;
-  }
-
-  public String getServiceId() {
-    return serviceId;
-  }
-
-  public void setServiceId(String serviceId) {
-    this.serviceId = serviceId;
-  }
-
-  public String getTenantId() {
-    return tenantId;
-  }
-
-  public void setTenantId(String tenantId) {
-    this.tenantId = tenantId;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java
deleted file mode 100644
index b38d4660e5a..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth.entities;
-
-import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-
-/**
- * Access token representation of Openstack Keystone authentication.
- * Class holds token id, tenant and expiration time.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- *
- * Example:
- * <pre>
- * "token" : {
- *   "RAX-AUTH:authenticatedBy" : [ "APIKEY" ],
- *   "expires" : "2013-07-12T05:19:24.685-05:00",
- *   "id" : "8bbea4215113abdab9d4c8fb0d37",
- *   "tenant" : { "id" : "01011970",
- *   "name" : "77777"
- *   }
- *  }
- * </pre>
- */
-@JsonIgnoreProperties(ignoreUnknown = true)
-
-public class AccessToken {
-  /**
-   * token expiration time
-   */
-  private String expires;
-  /**
-   * token id
-   */
-  private String id;
-  /**
-   * tenant to which the token is attached
-   */
-  private Tenant tenant;
-
-  /**
-   * @return token expiration time
-   */
-  public String getExpires() {
-    return expires;
-  }
-
-  /**
-   * @param expires the token expiration time
-   */
-  public void setExpires(String expires) {
-    this.expires = expires;
-  }
-
-  /**
-   * @return token value
-   */
-  public String getId() {
-    return id;
-  }
-
-  /**
-   * @param id token value
-   */
-  public void setId(String id) {
-    this.id = id;
-  }
-
-  /**
-   * @return tenant authenticated in Openstack Keystone
-   */
-  public Tenant getTenant() {
-    return tenant;
-  }
-
-  /**
-   * @param tenant tenant authenticated in Openstack Keystone
-   */
-  public void setTenant(Tenant tenant) {
-    this.tenant = tenant;
-  }
-
-  @Override
-  public String toString() {
-    return "AccessToken{" +
-            "id='" + id + '\'' +
-            ", tenant=" + tenant +
-            ", expires='" + expires + '\'' +
-            '}';
-  }
-}
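`AccessToken` kept `expires` as a raw `String`. The sample value in its javadoc is an ISO-8601 offset timestamp, so a caller wanting to know whether a token was still valid could have parsed it with `java.time`. A minimal sketch (class and method names are hypothetical; the connector's actual expiry handling may have differed):

```java
import java.time.OffsetDateTime;

public class TokenExpiryCheck {

    /** Parse an ISO-8601 "expires" string and compare it to a reference instant. */
    public static boolean isExpired(String expires, OffsetDateTime now) {
        return OffsetDateTime.parse(expires).isBefore(now);
    }

    public static void main(String[] args) {
        // The sample timestamp from the AccessToken javadoc above.
        String expires = "2013-07-12T05:19:24.685-05:00";
        System.out.println(isExpired(expires, OffsetDateTime.now()));
    }
}
```

Passing the reference instant in, rather than calling `OffsetDateTime.now()` inside the method, keeps the check deterministic and testable.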
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java
deleted file mode 100644
index 76e161b0642..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth.entities;
-
-import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-
-import java.util.List;
-
-/**
- * Describes Openstack Swift REST endpoints.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-@JsonIgnoreProperties(ignoreUnknown = true)
-
-public class Catalog {
-  /**
-   * List of valid swift endpoints
-   */
-  private List<Endpoint> endpoints;
-  /**
-   * endpoint links are additional information description
-   * which aren't used in Hadoop and Swift integration scope
-   */
-  private List<Object> endpoints_links;
-  /**
-   * Openstack REST service name. In our case name = "keystone"
-   */
-  private String name;
-
-  /**
-   * Type of REST service. In our case type = "identity"
-   */
-  private String type;
-
-  /**
-   * @return List of endpoints
-   */
-  public List<Endpoint> getEndpoints() {
-    return endpoints;
-  }
-
-  /**
-   * @param endpoints list of endpoints
-   */
-  public void setEndpoints(List<Endpoint> endpoints) {
-    this.endpoints = endpoints;
-  }
-
-  /**
-   * @return list of endpoint links
-   */
-  public List<Object> getEndpoints_links() {
-    return endpoints_links;
-  }
-
-  /**
-   * @param endpoints_links list of endpoint links
-   */
-  public void setEndpoints_links(List<Object> endpoints_links) {
-    this.endpoints_links = endpoints_links;
-  }
-
-  /**
-   * @return name of Openstack REST service
-   */
-  public String getName() {
-    return name;
-  }
-
-  /**
-   * @param name name of the Openstack REST service
-   */
-  public void setName(String name) {
-    this.name = name;
-  }
-
-  /**
-   * @return type of Openstack REST service
-   */
-  public String getType() {
-    return type;
-  }
-
-  /**
-   * @param type type of the REST service
-   */
-  public void setType(String type) {
-    this.type = type;
-  }
-}
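`Catalog` (with the `Endpoint` class deleted just below) modelled the Keystone service catalog returned at authentication time; elsewhere the connector selected the Swift endpoint from it. A minimal sketch of that selection, with small stand-in types since the real POJOs are gone (the lookup logic and names here are illustrative, not the connector's actual code; `"object-store"` is the Keystone service type registered for Swift):

```java
import java.net.URI;
import java.util.List;
import java.util.Optional;

public class CatalogLookupSketch {

    /** Minimal stand-ins for the deleted Catalog/Endpoint POJOs. */
    record Endpoint(String region, URI publicURL) {}
    record Catalog(String name, String type, List<Endpoint> endpoints) {}

    /** Pick the public URL of the first "object-store" service in the catalog. */
    static Optional<URI> objectStoreURL(List<Catalog> serviceCatalog) {
        return serviceCatalog.stream()
                .filter(c -> "object-store".equals(c.type()))
                .flatMap(c -> c.endpoints().stream())
                .map(Endpoint::publicURL)
                .findFirst();
    }

    public static void main(String[] args) {
        List<Catalog> catalog = List.of(
                new Catalog("keystone", "identity",
                        List.of(new Endpoint("RegionOne",
                                URI.create("https://identity.example/v2.0")))),
                new Catalog("swift", "object-store",
                        List.of(new Endpoint("RegionOne",
                                URI.create("https://storage.example/v1/AUTH_tenant")))));
        System.out.println(objectStoreURL(catalog).orElseThrow());
    }
}
```

Filtering by service `type` rather than `name` is the conventional approach, since deployments are free to rename services but the type strings are standardized.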
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java
deleted file mode 100644
index b1cbf2acc7b..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java
+++ /dev/null
@@ -1,194 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth.entities;
-
-import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-
-import java.net.URI;
-
-/**
- * Openstack Swift endpoint description.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-@JsonIgnoreProperties(ignoreUnknown = true)
-
-public class Endpoint {
-
-  /**
-   * endpoint id
-   */
-  private String id;
-
-  /**
-   * Keystone admin URL
-   */
-  private URI adminURL;
-
-  /**
-   * Keystone internal URL
-   */
-  private URI internalURL;
-
-  /**
-   * public accessible URL
-   */
-  private URI publicURL;
-
-  /**
-   * public accessible URL#2
-   */
-  private URI publicURL2;
-
-  /**
-   * Openstack region name
-   */
-  private String region;
-
-  /**
-   * This field is used in RackSpace authentication model
-   */
-  private String tenantId;
-
-  /**
-   * This field is used in the RackSpace auth model
-   */
-  private String versionId;
-
-  /**
-   * This field is used in the RackSpace auth model
-   */
-  private String versionInfo;
-
-  /**
-   * This field is used in the RackSpace auth model
-   */
-  private String versionList;
-
-
-  /**
-   * @return endpoint id
-   */
-  public String getId() {
-    return id;
-  }
-
-  /**
-   * @param id endpoint id
-   */
-  public void setId(String id) {
-    this.id = id;
-  }
-
-  /**
-   * @return Keystone admin URL
-   */
-  public URI getAdminURL() {
-    return adminURL;
-  }
-
-  /**
-   * @param adminURL Keystone admin URL
-   */
-  public void setAdminURL(URI adminURL) {
-    this.adminURL = adminURL;
-  }
-
-  /**
-   * @return internal Keystone
-   */
-  public URI getInternalURL() {
-    return internalURL;
-  }
-
-  /**
-   * @param internalURL Keystone internal URL
-   */
-  public void setInternalURL(URI internalURL) {
-    this.internalURL = internalURL;
-  }
-
-  /**
-   * @return public accessible URL
-   */
-  public URI getPublicURL() {
-    return publicURL;
-  }
-
-  /**
-   * @param publicURL public URL
-   */
-  public void setPublicURL(URI publicURL) {
-    this.publicURL = publicURL;
-  }
-
-  public URI getPublicURL2() {
-    return publicURL2;
-  }
-
-  public void setPublicURL2(URI publicURL2) {
-    this.publicURL2 = publicURL2;
-  }
-
-  /**
-   * @return Openstack region name
-   */
-  public String getRegion() {
-    return region;
-  }
-
-  /**
-   * @param region Openstack region name
-   */
-  public void setRegion(String region) {
-    this.region = region;
-  }
-
-  public String getTenantId() {
-    return tenantId;
-  }
-
-  public void setTenantId(String tenantId) {
-    this.tenantId = tenantId;
-  }
-
-  public String getVersionId() {
-    return versionId;
-  }
-
-  public void setVersionId(String versionId) {
-    this.versionId = versionId;
-  }
-
-  public String getVersionInfo() {
-    return versionInfo;
-  }
-
-  public void setVersionInfo(String versionInfo) {
-    this.versionInfo = versionInfo;
-  }
-
-  public String getVersionList() {
-    return versionList;
-  }
-
-  public void setVersionList(String versionList) {
-    this.versionList = versionList;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java
deleted file mode 100644
index 405d2c85368..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth.entities;
-
-import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-
-/**
- * Tenant is an abstraction in OpenStack which describes all account
- * information and user privileges in the system.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-@JsonIgnoreProperties(ignoreUnknown = true)
-public class Tenant {
-
-  /**
-   * tenant id
-   */
-  private String id;
-
-  /**
-   * tenant short description which Keystone returns
-   */
-  private String description;
-
-  /**
- * whether the user account is enabled
-   */
-  private boolean enabled;
-
-  /**
-   * tenant human readable name
-   */
-  private String name;
-
-  /**
-   * @return tenant name
-   */
-  public String getName() {
-    return name;
-  }
-
-  /**
-   * @param name tenant name
-   */
-  public void setName(String name) {
-    this.name = name;
-  }
-
-  /**
-   * @return true if account enabled and false otherwise
-   */
-  public boolean isEnabled() {
-    return enabled;
-  }
-
-  /**
-   * @param enabled enable or disable
-   */
-  public void setEnabled(boolean enabled) {
-    this.enabled = enabled;
-  }
-
-  /**
-   * @return account short description
-   */
-  public String getDescription() {
-    return description;
-  }
-
-  /**
-   * @param description set account description
-   */
-  public void setDescription(String description) {
-    this.description = description;
-  }
-
-  /**
-   * @return tenant id
-   */
-  public String getId() {
-    return id;
-  }
-
-  /**
-   * @param id tenant id
-   */
-  public void setId(String id) {
-    this.id = id;
-  }
-}
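The deleted Tenant class relies on Jackson's default bean conventions, which is why its comment warns against renaming fields and accessors: mapping libraries derive JSON property names from getter/setter names, not from source text. A minimal pure-JDK sketch of that derivation (the `propertyName` helper is illustrative, not Jackson's actual code):

```java
import java.beans.Introspector;

public class PropertyNameDemo {
    // Mirror of the deleted Tenant's accessors: bean-mapping libraries map
    // getDescription() -> "description", isEnabled() -> "enabled", setId() -> "id".
    // Renaming an accessor silently changes the JSON property it binds to.
    static String propertyName(String accessor) {
        String stripped;
        if (accessor.startsWith("get") || accessor.startsWith("set")) {
            stripped = accessor.substring(3);
        } else if (accessor.startsWith("is")) {
            stripped = accessor.substring(2);
        } else {
            return accessor;                        // not a bean accessor
        }
        return Introspector.decapitalize(stripped); // "Description" -> "description"
    }

    public static void main(String[] args) {
        System.out.println(propertyName("getDescription")); // description
        System.out.println(propertyName("isEnabled"));      // enabled
        System.out.println(propertyName("setId"));          // id
    }
}
```
This is also why `@JsonIgnoreProperties(ignoreUnknown = true)` was on every entity: incoming JSON with extra properties simply has nothing to bind to and is dropped rather than failing deserialization.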
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java
deleted file mode 100644
index da3bac20f2b..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java
+++ /dev/null
@@ -1,132 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.auth.entities;
-
-import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-import org.apache.hadoop.fs.swift.auth.Roles;
-
-import java.util.List;
-
-/**
- * Describes the user entity in Keystone.
- * Different Swift installations represent a User differently, so unknown
- * properties are ignored to avoid JSON deserialization failures.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-@JsonIgnoreProperties(ignoreUnknown = true)
-public class User {
-
-  /**
-   * user id in Keystone
-   */
-  private String id;
-
-  /**
-   * user human readable name
-   */
-  private String name;
-
-  /**
-   * user roles in Keystone
-   */
-  private List<Roles> roles;
-
-  /**
-   * links to user roles
-   */
-  private List<Object> roles_links;
-
-  /**
-   * human readable username in Keystone
-   */
-  private String username;
-
-  /**
-   * @return user id
-   */
-  public String getId() {
-    return id;
-  }
-
-  /**
-   * @param id user id
-   */
-  public void setId(String id) {
-    this.id = id;
-  }
-
-
-  /**
-   * @return user name
-   */
-  public String getName() {
-    return name;
-  }
-
-
-  /**
-   * @param name user name
-   */
-  public void setName(String name) {
-    this.name = name;
-  }
-
-  /**
-   * @return user roles
-   */
-  public List<Roles> getRoles() {
-    return roles;
-  }
-
-  /**
-   * @param roles sets user roles
-   */
-  public void setRoles(List<Roles> roles) {
-    this.roles = roles;
-  }
-
-  /**
-   * @return user roles links
-   */
-  public List<Object> getRoles_links() {
-    return roles_links;
-  }
-
-  /**
-   * @param roles_links user roles links
-   */
-  public void setRoles_links(List<Object> roles_links) {
-    this.roles_links = roles_links;
-  }
-
-  /**
-   * @return username
-   */
-  public String getUsername() {
-    return username;
-  }
-
-  /**
-   * @param username human readable user name
-   */
-  public void setUsername(String username) {
-    this.username = username;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java
deleted file mode 100644
index fdb9a3973ad..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-import org.apache.http.HttpResponse;
-
-import java.net.URI;
-
-/**
- * An exception raised when an authentication request was rejected
- */
-public class SwiftAuthenticationFailedException extends SwiftInvalidResponseException {
-
-  public SwiftAuthenticationFailedException(String message,
-                                            int statusCode,
-                                            String operation,
-                                            URI uri) {
-    super(message, statusCode, operation, uri);
-  }
-
-  public SwiftAuthenticationFailedException(String message,
-                                            String operation,
-                                            URI uri,
-                                            HttpResponse resp) {
-    super(message, operation, uri, resp);
-  }
-
-  @Override
-  public String exceptionTitle() {
-    return "Authentication Failure";
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java
deleted file mode 100644
index f5b2abde0a9..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-import org.apache.http.HttpResponse;
-
-import java.net.URI;
-
-/**
- * Thrown to indicate that data locality can't be calculated or the requested path is incorrect.
- * Data locality can't be calculated if the OpenStack Swift version is too old.
- */
-public class SwiftBadRequestException extends SwiftInvalidResponseException {
-
-  public SwiftBadRequestException(String message,
-                                  String operation,
-                                  URI uri,
-                                  HttpResponse resp) {
-    super(message, operation, uri, resp);
-  }
-
-  public SwiftBadRequestException(String message,
-                                  int statusCode,
-                                  String operation,
-                                  URI uri) {
-    super(message, statusCode, operation, uri);
-  }
-
-  @Override
-  public String exceptionTitle() {
-    return "BadRequest";
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java
deleted file mode 100644
index 3651f2e0505..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Exception raised to indicate there is some problem with how the Swift FS
- * is configured
- */
-public class SwiftConfigurationException extends SwiftException {
-  public SwiftConfigurationException(String message) {
-    super(message);
-  }
-
-  public SwiftConfigurationException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java
deleted file mode 100644
index eeaf8a5606f..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Exception raised when an attempt is made to use a closed stream
- */
-public class SwiftConnectionClosedException extends SwiftException {
-
-  public static final String MESSAGE =
-    "Connection to Swift service has been closed";
-
-  public SwiftConnectionClosedException() {
-    super(MESSAGE);
-  }
-
-  public SwiftConnectionClosedException(String reason) {
-    super(MESSAGE + ": " + reason);
-  }
-
-}
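The deleted SwiftConnectionClosedException backs a common guard pattern: a stream wrapper checks a closed flag on every operation and fails fast with a descriptive message instead of an obscure NPE deeper in the stack. A minimal sketch of the pattern, assuming only the JDK (the `ClosedGuardInputStream` name is invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClosedGuardDemo {
    /** Wraps a stream and rejects single-byte reads after close(). */
    static class ClosedGuardInputStream extends FilterInputStream {
        private volatile boolean closed;

        ClosedGuardInputStream(InputStream in) { super(in); }

        @Override
        public int read() throws IOException {
            if (closed) {
                // Same diagnostic role as SwiftConnectionClosedException.MESSAGE
                throw new IOException("Connection to Swift service has been closed");
            }
            return super.read();
        }

        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    public static void main(String[] args) throws IOException {
        ClosedGuardInputStream in =
            new ClosedGuardInputStream(new ByteArrayInputStream(new byte[]{42}));
        System.out.println(in.read()); // 42
        in.close();
        try {
            in.read();
        } catch (IOException expected) {
            System.out.println("read after close rejected");
        }
    }
}
```
A production version would guard every read/skip/mark overload the same way; this sketch covers only `read()`.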
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java
deleted file mode 100644
index 74607b8915a..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Thrown to indicate that the connection was lost or could not be established.
- */
-public class SwiftConnectionException extends SwiftException {
-  public SwiftConnectionException() {
-  }
-
-  public SwiftConnectionException(String message) {
-    super(message);
-  }
-
-  public SwiftConnectionException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java
deleted file mode 100644
index eba674fee5d..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java
+++ /dev/null
@@ -1,43 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-import java.io.IOException;
-
-/**
- * A Swift-specific exception; subclasses exist
- * for various specific problems.
- */
-public class SwiftException extends IOException {
-  public SwiftException() {
-    super();
-  }
-
-  public SwiftException(String message) {
-    super(message);
-  }
-
-  public SwiftException(String message, Throwable cause) {
-    super(message, cause);
-  }
-
-  public SwiftException(Throwable cause) {
-    super(cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java
deleted file mode 100644
index 0f3e5d98849..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * The internal state of the Swift client is wrong; presumably a sign
- * of some bug.
- */
-public class SwiftInternalStateException extends SwiftException {
-
-  public SwiftInternalStateException(String message) {
-    super(message);
-  }
-
-  public SwiftInternalStateException(String message, Throwable cause) {
-    super(message, cause);
-  }
-
-  public SwiftInternalStateException(Throwable cause) {
-    super(cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java
deleted file mode 100644
index e90e57519b9..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java
+++ /dev/null
@@ -1,118 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-import org.apache.hadoop.fs.swift.util.HttpResponseUtils;
-import org.apache.http.HttpResponse;
-
-import java.io.IOException;
-import java.net.URI;
-
-/**
- * Exception raised when the HTTP status code is invalid. The status code,
- * operation name and target URI are all recorded in the exception.
- */
-public class SwiftInvalidResponseException extends SwiftConnectionException {
-
-  public final int statusCode;
-  public final String operation;
-  public final URI uri;
-  public final String body;
-
-  public SwiftInvalidResponseException(String message,
-                                       int statusCode,
-                                       String operation,
-                                       URI uri) {
-    super(message);
-    this.statusCode = statusCode;
-    this.operation = operation;
-    this.uri = uri;
-    this.body = "";
-  }
-
-  public SwiftInvalidResponseException(String message,
-                                       String operation,
-                                       URI uri,
-                                       HttpResponse resp) {
-    super(message);
-    this.statusCode = resp.getStatusLine().getStatusCode();
-    this.operation = operation;
-    this.uri = uri;
-    String bodyAsString;
-    try {
-      bodyAsString = HttpResponseUtils.getResponseBodyAsString(resp);
-      if (bodyAsString == null) {
-        bodyAsString = "";
-      }
-    } catch (IOException e) {
-      bodyAsString = "";
-    }
-    this.body = bodyAsString;
-  }
-
-  public int getStatusCode() {
-    return statusCode;
-  }
-
-  public String getOperation() {
-    return operation;
-  }
-
-  public URI getUri() {
-    return uri;
-  }
-
-  public String getBody() {
-    return body;
-  }
-
-  /**
-   * Override point: the title of an exception; this is used in the
-   * toString() method.
-   * @return the new exception title
-   */
-  public String exceptionTitle() {
-    return "Invalid Response";
-  }
-
-  /**
-   * Build a description that includes the exception title, the URI,
-   * the message, the status code, and any body of the response.
-   * @return the string value for display
-   */
-  @Override
-  public String toString() {
-    StringBuilder msg = new StringBuilder();
-    msg.append(exceptionTitle());
-    msg.append(": ");
-    msg.append(getMessage());
-    msg.append("  ");
-    msg.append(operation);
-    msg.append(" ");
-    msg.append(uri);
-    msg.append(" => ");
-    msg.append(statusCode);
-    if (body != null && !body.isEmpty()) {
-      msg.append(" : ");
-      msg.append(body);
-    }
-
-    return msg.toString();
-  }
-}
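SwiftInvalidResponseException above centralises error formatting in toString() and lets subclasses change only the exceptionTitle() prefix, which is how SwiftAuthenticationFailedException and SwiftBadRequestException produce distinct messages from shared code. A condensed sketch of that override pattern (class names shortened for illustration):

```java
import java.net.URI;

public class TitleOverrideDemo {
    /** Base exception: carries the request context and formats it once. */
    static class InvalidResponse extends Exception {
        final int statusCode;
        final String operation;
        final URI uri;

        InvalidResponse(String message, int statusCode, String operation, URI uri) {
            super(message);
            this.statusCode = statusCode;
            this.operation = operation;
            this.uri = uri;
        }

        /** Override point: subclasses supply only the title. */
        String exceptionTitle() { return "Invalid Response"; }

        @Override
        public String toString() {
            return exceptionTitle() + ": " + getMessage()
                + "  " + operation + " " + uri + " => " + statusCode;
        }
    }

    /** Subclass changes the title, inherits the formatting. */
    static class AuthFailed extends InvalidResponse {
        AuthFailed(String message, int statusCode, String operation, URI uri) {
            super(message, statusCode, operation, uri);
        }

        @Override
        String exceptionTitle() { return "Authentication Failure"; }
    }

    public static void main(String[] args) {
        InvalidResponse e = new AuthFailed("bad token", 401, "POST",
            URI.create("https://auth.example.com/v2.0/tokens"));
        System.out.println(e);
    }
}
```
The payoff is that every subclass gets a uniform "title: message operation uri => status" diagnostic for free.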
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java
deleted file mode 100644
index 0b078d7f433..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Exception raised when the JSON/object mapping fails.
- */
-public class SwiftJsonMarshallingException extends SwiftException {
-
-  public SwiftJsonMarshallingException(String message) {
-    super(message);
-  }
-
-  public SwiftJsonMarshallingException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java
deleted file mode 100644
index 8f78f70f44b..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Used to relay exceptions upstream from the inner implementation
- * to the public API, where it is downgraded to a logged failure.
- * Making it visible internally aids testing.
- */
-public class SwiftOperationFailedException extends SwiftException {
-
-  public SwiftOperationFailedException(String message) {
-    super(message);
-  }
-
-  public SwiftOperationFailedException(String message, Throwable cause) {
-    super(message, cause);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java
deleted file mode 100644
index 1e7ca67d1b0..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-import org.apache.http.HttpResponse;
-
-import java.net.URI;
-
-/**
- * Exception raised if a Swift endpoint returned an HTTP response indicating
- * the caller is being throttled.
- */
-public class SwiftThrottledRequestException extends
-                                            SwiftInvalidResponseException {
-  public SwiftThrottledRequestException(String message,
-                                        String operation,
-                                        URI uri,
-                                        HttpResponse resp) {
-    super(message, operation, uri, resp);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java
deleted file mode 100644
index b7e011c59ab..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.exceptions;
-
-/**
- * Exception raised on an unsupported feature in the FS API, such as
- * <code>append()</code>.
- */
-public class SwiftUnsupportedFeatureException extends SwiftException {
-
-  public SwiftUnsupportedFeatureException(String message) {
-    super(message);
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java
deleted file mode 100644
index c25a630cc29..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
-
-/**
- * Implementation for SwiftRestClient to make copy requests.
- * COPY is a method introduced by WebDAV (RFC 2518), and is not something
- * that every proxy en route to a filesystem can handle.
- */
-class CopyRequest extends HttpEntityEnclosingRequestBase {
-
-  CopyRequest() {
-    super();
-  }
-
-  /**
-   * @return http method name
-   */
-  @Override
-  public String getMethod() {
-    return "COPY";
-  }
-}
\ No newline at end of file
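The deleted CopyRequest existed because the Apache HttpClient 4 API models each HTTP verb as a class, so a non-standard verb like WebDAV COPY needed a small subclass. With the JDK 11+ java.net.http API the same request needs no subclass, since arbitrary verbs are passed as a string. A sketch (the Swift URI and `Destination` value are placeholders):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CopyRequestDemo {
    /** Builds a WebDAV-style COPY request; no per-verb subclass is needed. */
    static HttpRequest buildCopy(String source, String destination) {
        return HttpRequest.newBuilder(URI.create(source))
            .header("Destination", destination)
            // Custom verbs go straight through as strings.
            .method("COPY", HttpRequest.BodyPublishers.noBody())
            .build();
    }

    public static void main(String[] args) {
        HttpRequest copy = buildCopy(
            "https://swift.example.com/v1/account/container/object",
            "/v1/account/container/copy-of-object");
        System.out.println(copy.method()); // COPY
    }
}
```
Building the request makes no network call, so the verb and headers can be inspected locally; the RFC 2518 caveat from the deleted comment still applies when the request actually traverses proxies.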
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java
deleted file mode 100644
index d159caa6690..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.lang.reflect.Constructor;
-import java.net.ConnectException;
-import java.net.NoRouteToHostException;
-import java.net.SocketTimeoutException;
-import java.net.UnknownHostException;
-
-/**
- * Variant of Hadoop NetUtils exception wrapping with URI awareness and
- * available in branch-1 too.
- */
-public class ExceptionDiags {
-  private static final Logger LOG =
-      LoggerFactory.getLogger(ExceptionDiags.class);
-
-  /** text to point users elsewhere: {@value} */
-  private static final String FOR_MORE_DETAILS_SEE
-    = " For more details see:  ";
-  /** text included in wrapped exceptions if the host is null: {@value} */
-  public static final String UNKNOWN_HOST = "(unknown)";
-  /** Base URL of the Hadoop Wiki: {@value} */
-  public static final String HADOOP_WIKI = "http://wiki.apache.org/hadoop/";
-
-  /**
-   * Take an IOException and a URI, wrap it where possible with
-   * something that includes the URI
-   *
-   * @param dest target URI
-   * @param operation operation
-   * @param exception the caught exception.
-   * @return an exception to throw
-   */
-  public static IOException wrapException(final String dest,
-                                          final String operation,
-                                          final IOException exception) {
-    String action = operation + " " + dest;
-    String xref = null;
-
-    if (exception instanceof ConnectException) {
-      xref = "ConnectionRefused";
-    } else if (exception instanceof UnknownHostException) {
-      xref = "UnknownHost";
-    } else if (exception instanceof SocketTimeoutException) {
-      xref = "SocketTimeout";
-    } else if (exception instanceof NoRouteToHostException) {
-      xref = "NoRouteToHost";
-    }
-    String msg = action
-                 + " failed on exception: "
-                 + exception;
-    if (xref != null) {
-       msg = msg + ";" + see(xref);
-    }
-    return wrapWithMessage(exception, msg);
-  }
-
-  private static String see(final String entry) {
-    return FOR_MORE_DETAILS_SEE + HADOOP_WIKI + entry;
-  }
-
-  @SuppressWarnings("unchecked")
-  private static <T extends IOException> T wrapWithMessage(
-    T exception, String msg) {
-    Class<? extends Throwable> clazz = exception.getClass();
-    try {
-      Constructor<? extends Throwable> ctor =
-        clazz.getConstructor(String.class);
-      Throwable t = ctor.newInstance(msg);
-      return (T) (t.initCause(exception));
-    } catch (Throwable e) {
-      return exception;
-    }
-  }
-
-}
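The wrapWithMessage technique in the removed ExceptionDiags above (reflectively reconstructing the same exception class so the type is preserved while the message gains URI context, falling back to the original if no String constructor exists) works standalone; this sketch mirrors the removed code, with the class name illustrative:

```java
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.net.ConnectException;

public class WrapSketch {
    // Re-create the wrapped exception type with an enriched message and
    // chain the original as the cause; fall back to the original exception
    // if its class has no (String) constructor.
    @SuppressWarnings("unchecked")
    static <T extends IOException> T wrapWithMessage(T exception, String msg) {
        try {
            Constructor<? extends Throwable> ctor =
                exception.getClass().getConstructor(String.class);
            Throwable t = ctor.newInstance(msg);
            return (T) t.initCause(exception);
        } catch (Throwable e) {
            return exception;  // no suitable constructor: keep the original
        }
    }

    public static void main(String[] args) {
        ConnectException ce = new ConnectException("refused");
        IOException wrapped =
            wrapWithMessage(ce, "GET http://host/path failed: refused");
        if (wrapped.getCause() != ce) {
            throw new AssertionError("cause not chained");
        }
        System.out.println(wrapped.getClass().getSimpleName()
            + ": " + wrapped.getMessage());
    }
}
```

Because the rethrown exception keeps its original class, callers catching `ConnectException` still match while logs show the destination URI.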
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java
deleted file mode 100644
index b471f218e57..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.swift.http;
-
-/**
- * Response tuple from GET operations; combines the input stream with the content length
- */
-public class HttpBodyContent {
-  private final long contentLength;
-  private final HttpInputStreamWithRelease inputStream;
-
-  /**
-   * build a body response
-   * @param inputStream input stream from the operation
-   * @param contentLength length of content; may be -1 for "don't know"
-   */
-  public HttpBodyContent(HttpInputStreamWithRelease inputStream,
-                         long contentLength) {
-    this.contentLength = contentLength;
-    this.inputStream = inputStream;
-  }
-
-  public long getContentLength() {
-    return contentLength;
-  }
-
-  public HttpInputStreamWithRelease getInputStream() {
-    return inputStream;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java
deleted file mode 100644
index bd025aca1b8..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java
+++ /dev/null
@@ -1,234 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-import org.apache.http.HttpResponse;
-import org.apache.http.client.methods.HttpRequestBase;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.ByteArrayInputStream;
-import java.io.EOFException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.net.URI;
-
-/**
- * This replaces the input stream release class from JetS3t and AWS;
- * # Failures in the constructor are relayed up instead of simply logged.
- * # it is set up to be more robust at teardown
- * # release logic is thread safe
- * Note that the inner stream itself makes no thread
- * safety guarantees -this stream is not to be read across threads.
- * The thread safety logic here is to ensure that even if somebody ignores
- * that rule, the release code does not get entered twice -and that
- * any release in one thread is picked up by read operations in all others.
- */
-public class HttpInputStreamWithRelease extends InputStream {
-
-  private static final Logger LOG =
-      LoggerFactory.getLogger(HttpInputStreamWithRelease.class);
-  private final URI uri;
-  private HttpRequestBase req;
-  private HttpResponse resp;
-  //flag to say the stream is released -volatile so that read operations
-  //pick it up even while unsynchronized.
-  private volatile boolean released;
-  //volatile flag to verify that data is consumed.
-  private volatile boolean dataConsumed;
-  private InputStream inStream;
-  /**
-   * In debug builds, this is filled in with the construction-time
-   * stack, which is then included in logs from the finalize(), method.
-   */
-  private final Exception constructionStack;
-
-  /**
-   * Why the stream is closed
-   */
-  private String reasonClosed = "unopened";
-
-  public HttpInputStreamWithRelease(URI uri, HttpRequestBase req,
-      HttpResponse resp) throws IOException {
-    this.uri = uri;
-    this.req = req;
-    this.resp = resp;
-    constructionStack = LOG.isDebugEnabled() ? new Exception("stack") : null;
-    if (req == null) {
-      throw new IllegalArgumentException("Null 'request' parameter ");
-    }
-    try {
-      inStream = resp.getEntity().getContent();
-    } catch (IOException e) {
-      inStream = new ByteArrayInputStream(new byte[]{});
-      throw releaseAndRethrow("getResponseBodyAsStream() in constructor -" + e, e);
-    }
-  }
-
-  @Override
-  public void close() throws IOException {
-    release("close()", null);
-  }
-
-  /**
-   * Release logic
-   * @param reason reason for release (used in debug messages)
-   * @param ex exception that is a cause -null for non-exceptional releases
-   * @return true if the release took place here
-   * @throws IOException if the abort or close operations failed.
-   */
-  private synchronized boolean release(String reason, Exception ex) throws
-                                                                   IOException {
-    if (!released) {
-      reasonClosed = reason;
-      try {
-        LOG.debug("Releasing connection to {}:  {}", uri, reason, ex);
-        if (req != null) {
-          if (!dataConsumed) {
-            req.abort();
-          }
-          req.releaseConnection();
-        }
-        if (inStream != null) {
-          //this guard may seem un-needed, but a stack trace seen
-          //on the JetS3t predecessor implied that it
-          //is useful
-          inStream.close();
-        }
-        return true;
-      } finally {
-        //if something went wrong here, we do not want the release() operation
-        //to try and do anything in advance.
-        released = true;
-        dataConsumed = true;
-      }
-    } else {
-      return false;
-    }
-  }
-
-  /**
-   * Release the method, using the exception as a cause
-   * @param operation operation that failed
-   * @param ex the exception which triggered it.
-   * @return the exception to throw
-   */
-  private IOException releaseAndRethrow(String operation, IOException ex) {
-    try {
-      release(operation, ex);
-    } catch (IOException ioe) {
-      LOG.debug("Exception during release: {}", operation, ioe);
-      //make this the exception if there was none before
-      if (ex == null) {
-        ex = ioe;
-      }
-    }
-    return ex;
-  }
-
-  /**
-   * Assume that the connection is not released: throws an exception if it is
-   * @throws SwiftConnectionClosedException
-   */
-  private synchronized void assumeNotReleased() throws SwiftConnectionClosedException {
-    if (released || inStream == null) {
-      throw new SwiftConnectionClosedException(reasonClosed);
-    }
-  }
-
-  @Override
-  public int available() throws IOException {
-    assumeNotReleased();
-    try {
-      return inStream.available();
-    } catch (IOException e) {
-      throw releaseAndRethrow("available() failed -" + e, e);
-    }
-  }
-
-  @Override
-  public int read() throws IOException {
-    assumeNotReleased();
-    int read = 0;
-    try {
-      read = inStream.read();
-    } catch (EOFException e) {
-      LOG.debug("EOF exception", e);
-      read = -1;
-    } catch (IOException e) {
-      throw releaseAndRethrow("read()", e);
-    }
-    if (read < 0) {
-      dataConsumed = true;
-      release("read() -all data consumed", null);
-    }
-    return read;
-  }
-
-  @Override
-  public int read(byte[] b, int off, int len) throws IOException {
-    SwiftUtils.validateReadArgs(b, off, len);
-    if (len == 0) {
-      return 0;
-    }
-    //if the stream is already closed, then report an exception.
-    assumeNotReleased();
-    //now read in a buffer, reacting differently to different operations
-    int read;
-    try {
-      read = inStream.read(b, off, len);
-    } catch (EOFException e) {
-      LOG.debug("EOF exception", e);
-      read = -1;
-    } catch (IOException e) {
-      throw releaseAndRethrow("read(b, off, " + len + ")", e);
-    }
-    if (read < 0) {
-      dataConsumed = true;
-      release("read() -all data consumed", null);
-    }
-    return read;
-  }
-
-  /**
-   * Finalizer does release the stream, but also logs at WARN level
-   * including the URI at fault
-   */
-  @Override
-  protected void finalize() {
-    try {
-      if (release("finalize()", constructionStack)) {
-        LOG.warn("input stream of {}" +
-                 " not closed properly -cleaned up in finalize()", uri);
-      }
-    } catch (Exception e) {
-      //swallow anything that failed here
-      LOG.warn("Exception while releasing {} in finalizer", uri, e);
-    }
-  }
-
-  @Override
-  public String toString() {
-    return "HttpInputStreamWithRelease working with " + uri
-      +" released=" + released
-      +" dataConsumed=" + dataConsumed;
-  }
-}
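The release semantics the removed HttpInputStreamWithRelease documents (a volatile flag read without locks, synchronized once-only teardown, and eager release at EOF) can be reduced to a small FilterInputStream; this is an illustrative sketch of the pattern, not the Hadoop class:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicInteger;

public class ReleaseOnceStream extends FilterInputStream {
    // volatile so reads in other threads observe a release promptly
    private volatile boolean released;
    final AtomicInteger releases = new AtomicInteger();

    ReleaseOnceStream(InputStream in) { super(in); }

    // synchronized and guarded so teardown runs exactly once, even if
    // close() is invoked from several threads or again in a finalizer
    private synchronized boolean release() throws IOException {
        if (released) {
            return false;
        }
        try {
            releases.incrementAndGet();
            in.close();
            return true;
        } finally {
            released = true;  // set even if close() threw
        }
    }

    @Override
    public int read() throws IOException {
        if (released) {
            throw new IOException("stream already released");
        }
        int b = in.read();
        if (b < 0) {
            release();  // EOF: hand the connection back eagerly
        }
        return b;
    }

    @Override
    public void close() throws IOException { release(); }

    public static void main(String[] args) throws IOException {
        ReleaseOnceStream s =
            new ReleaseOnceStream(new ByteArrayInputStream(new byte[]{1}));
        while (s.read() >= 0) { }  // drain: triggers release at EOF
        s.close();                 // second release attempt is a no-op
        if (s.releases.get() != 1) {
            throw new AssertionError(s.releases.get());
        }
        System.out.println("released once");
    }
}
```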
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
deleted file mode 100644
index f6917d3ffae..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
+++ /dev/null
@@ -1,225 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-
-import java.net.URI;
-import java.util.Properties;
-
-import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*;
-
-/**
- * This class implements the binding logic between Hadoop configurations
- * and the swift rest client.
- * <p>
- * The swift rest client takes a Properties instance containing
- * the string values it uses to bind to a swift endpoint.
- * <p>
- * This class extracts the values for a specific filesystem endpoint
- * and then builds an appropriate Properties file.
- */
-public final class RestClientBindings {
-  private static final Logger LOG =
-      LoggerFactory.getLogger(RestClientBindings.class);
-
-  public static final String E_INVALID_NAME = "Invalid swift hostname '%s':" +
-          " hostname must in form container.service";
-
-  /**
-   * Public for testing : build the full prefix for use in resolving
-   * configuration items
-   *
-   * @param service service to use
-   * @return the prefix string <i>without any trailing "."</i>
-   */
-  public static String buildSwiftInstancePrefix(String service) {
-    return SWIFT_SERVICE_PREFIX + service;
-  }
-
-  /**
-   * Raise an exception for an invalid service name
-   *
-   * @param hostname hostname that was being parsed
-   * @return an exception to throw
-   */
-  private static SwiftConfigurationException invalidName(String hostname) {
-    return new SwiftConfigurationException(
-            String.format(E_INVALID_NAME, hostname));
-  }
-
-  /**
-   * Get the container name from the hostname -the single element before the
-   * first "." in the hostname
-   *
-   * @param hostname hostname to split
-   * @return the container
-   * @throws SwiftConfigurationException
-   */
-  public static String extractContainerName(String hostname) throws
-          SwiftConfigurationException {
-    int i = hostname.indexOf(".");
-    if (i <= 0) {
-      throw invalidName(hostname);
-    }
-    return hostname.substring(0, i);
-  }
-
-  public static String extractContainerName(URI uri) throws
-          SwiftConfigurationException {
-    return extractContainerName(uri.getHost());
-  }
-
-  /**
-   * Get the service name from a longer hostname string
-   *
-   * @param hostname hostname
-   * @return the separated out service name
-   * @throws SwiftConfigurationException if the hostname was invalid
-   */
-  public static String extractServiceName(String hostname) throws
-          SwiftConfigurationException {
-    int i = hostname.indexOf(".");
-    if (i <= 0) {
-      throw invalidName(hostname);
-    }
-    String service = hostname.substring(i + 1);
-    if (service.isEmpty() || service.contains(".")) {
-      //empty service, or one containing dots -not currently supported
-      throw invalidName(hostname);
-    }
-    return service;
-  }
-
-  public static String extractServiceName(URI uri) throws
-          SwiftConfigurationException {
-    return extractServiceName(uri.getHost());
-  }
-
-  /**
-   * Build a properties instance bound to the configuration file -using
-   * the filesystem URI as the source of the information.
-   *
-   * @param fsURI filesystem URI
-   * @param conf  configuration
-   * @return a properties file with the instance-specific properties extracted
-   *         and bound to the swift client properties.
-   * @throws SwiftConfigurationException if the configuration is invalid
-   */
-  public static Properties bind(URI fsURI, Configuration conf) throws
-          SwiftConfigurationException {
-    String host = fsURI.getHost();
-    if (host == null || host.isEmpty()) {
-      //expect shortnames -> conf names
-      throw invalidName(host);
-    }
-
-    String container = extractContainerName(host);
-    String service = extractServiceName(host);
-
-    //build filename schema
-    String prefix = buildSwiftInstancePrefix(service);
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Filesystem " + fsURI
-              + " is using configuration keys " + prefix);
-    }
-    Properties props = new Properties();
-    props.setProperty(SWIFT_SERVICE_PROPERTY, service);
-    props.setProperty(SWIFT_CONTAINER_PROPERTY, container);
-    copy(conf, prefix + DOT_AUTH_URL, props, SWIFT_AUTH_PROPERTY, true);
-    copy(conf, prefix + DOT_USERNAME, props, SWIFT_USERNAME_PROPERTY, true);
-    copy(conf, prefix + DOT_APIKEY, props, SWIFT_APIKEY_PROPERTY, false);
-    copy(conf, prefix + DOT_PASSWORD, props, SWIFT_PASSWORD_PROPERTY,
-            props.containsKey(SWIFT_APIKEY_PROPERTY));
-    copy(conf, prefix + DOT_TENANT, props, SWIFT_TENANT_PROPERTY, false);
-    copy(conf, prefix + DOT_REGION, props, SWIFT_REGION_PROPERTY, false);
-    copy(conf, prefix + DOT_HTTP_PORT, props, SWIFT_HTTP_PORT_PROPERTY, false);
-    copy(conf, prefix +
-            DOT_HTTPS_PORT, props, SWIFT_HTTPS_PORT_PROPERTY, false);
-
-    copyBool(conf, prefix + DOT_PUBLIC, props, SWIFT_PUBLIC_PROPERTY, false);
-    copyBool(conf, prefix + DOT_LOCATION_AWARE, props,
-             SWIFT_LOCATION_AWARE_PROPERTY, false);
-
-    return props;
-  }
-
-  /**
-   * Extract a boolean value from the configuration and copy it to the
-   * properties instance.
-   * @param conf     source configuration
-   * @param confKey  key in the configuration file
-   * @param props    destination property set
-   * @param propsKey key in the property set
-   * @param defVal default value
-   */
-  private static void copyBool(Configuration conf,
-                               String confKey,
-                               Properties props,
-                               String propsKey,
-                               boolean defVal) {
-    boolean b = conf.getBoolean(confKey, defVal);
-    props.setProperty(propsKey, Boolean.toString(b));
-  }
-
-  private static void set(Properties props, String key, String optVal) {
-    if (optVal != null) {
-      props.setProperty(key, optVal);
-    }
-  }
-
-  /**
-   * Copy a (trimmed) property from the configuration file to the properties file.
-   * <p>
-   * If marked as required and not found in the configuration, an
-   * exception is raised.
-   * If not required -and missing- then the property will not be set.
-   * In this case, if the property is already in the Properties instance,
-   * it will remain untouched.
-   *
-   * @param conf     source configuration
-   * @param confKey  key in the configuration file
-   * @param props    destination property set
-   * @param propsKey key in the property set
-   * @param required is the property required
-   * @throws SwiftConfigurationException if the property is required but was
-   *                                     not found in the configuration instance.
-   */
-  public static void copy(Configuration conf, String confKey, Properties props,
-                          String propsKey,
-                          boolean required) throws SwiftConfigurationException {
-    //TODO: replace. version compatibility issue conf.getTrimmed fails with NoSuchMethodError
-    String val = conf.get(confKey);
-    if (val != null) {
-      val = val.trim();
-    }
-    if (required && val == null) {
-      throw new SwiftConfigurationException(
-              "Missing mandatory configuration option: "
-                      +
-                      confKey);
-    }
-    set(props, propsKey, val);
-  }
-
-
-}
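The hostname convention the removed RestClientBindings enforced (`container.service`, with exactly one dot and neither part empty) is easy to sketch standalone; this mirrors the removed checks, with illustrative names:

```java
import java.net.URI;

public class SwiftNameSketch {
    // container.service -> {container, service}; mirrors the validation in
    // the removed RestClientBindings: one dot, neither part empty.
    static String[] split(String hostname) {
        int i = hostname.indexOf('.');
        if (i <= 0) {
            throw new IllegalArgumentException(
                "Invalid swift hostname: " + hostname);
        }
        String container = hostname.substring(0, i);
        String service = hostname.substring(i + 1);
        if (service.isEmpty() || service.contains(".")) {
            throw new IllegalArgumentException(
                "Invalid swift hostname: " + hostname);
        }
        return new String[]{container, service};
    }

    public static void main(String[] args) {
        // the host of a swift:// URI carries both names
        String host = URI.create("swift://data.rackspace/dir").getHost();
        String[] parts = split(host);
        if (!parts[0].equals("data") || !parts[1].equals("rackspace")) {
            throw new AssertionError(parts[0] + "/" + parts[1]);
        }
        System.out.println("container=" + parts[0] + " service=" + parts[1]);
    }
}
```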
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java
deleted file mode 100644
index a01f32c18b2..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java
+++ /dev/null
@@ -1,270 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.apache.hadoop.util.VersionInfo;
-
-/**
- * Constants used in the Swift REST protocol,
- * and in the properties used to configure the {@link SwiftRestClient}.
- */
-public class SwiftProtocolConstants {
-  /**
-   * Swift-specific header for authentication: {@value}
-   */
-  public static final String HEADER_AUTH_KEY = "X-Auth-Token";
-
-  /**
-   * Default port used by Swift for HTTP
-   */
-  public static final int SWIFT_HTTP_PORT = 8080;
-
-  /**
-   * Default port used by Swift Auth for HTTPS
-   */
-  public static final int SWIFT_HTTPS_PORT = 443;
-
-  /** HTTP standard {@value} header */
-  public static final String HEADER_RANGE = "Range";
-
-  /** HTTP standard {@value} header */
-  public static final String HEADER_DESTINATION = "Destination";
-
-  /** HTTP standard {@value} header */
-  public static final String HEADER_LAST_MODIFIED = "Last-Modified";
-
-  /** HTTP standard {@value} header */
-  public static final String HEADER_CONTENT_LENGTH = "Content-Length";
-
-  /** HTTP standard {@value} header */
-  public static final String HEADER_CONTENT_RANGE = "Content-Range";
-
-  /**
-   * Pattern for range headers
-   */
-  public static final String SWIFT_RANGE_HEADER_FORMAT_PATTERN = "bytes=%d-%d";
-
-  /**
-   *  section in the JSON catalog provided after auth listing the swift FS:
-   *  {@value}
-   */
-  public static final String SERVICE_CATALOG_SWIFT = "swift";
-  /**
-   *  section in the JSON catalog provided after auth listing the cloud files;
-   * this is an alternate catalog entry name
-   *  {@value}
-   */
-  public static final String SERVICE_CATALOG_CLOUD_FILES = "cloudFiles";
-  /**
-   *  section in the JSON catalog provided after auth listing the object store;
-   * this is an alternate catalog entry name
-   *  {@value}
-   */
-  public static final String SERVICE_CATALOG_OBJECT_STORE = "object-store";
-
-  /**
-   * entry in the swift catalog defining the prefix used to talk to objects
-   *  {@value}
-   */
-  public static final String SWIFT_OBJECT_AUTH_ENDPOINT =
-          "/object_endpoint/";
-  /**
-   * Swift-specific header: object manifest used in the final upload
-   * of a multipart operation: {@value}
-   */
-  public static final String X_OBJECT_MANIFEST = "X-Object-Manifest";
-  /**
-   * Swift-specific header -#of objects in a container: {@value}
-   */
-  public static final String X_CONTAINER_OBJECT_COUNT =
-          "X-Container-Object-Count";
-  /**
-   * Swift-specific header: no. of bytes used in a container {@value}
-   */
-  public static final String X_CONTAINER_BYTES_USED = "X-Container-Bytes-Used";
-
-  /**
-   * Header to set when requesting the latest version of a file: {@value}
-   */
-  public static final String X_NEWEST = "X-Newest";
-
-  /**
-   * throttled response sent by some endpoints.
-   */
-  public static final int SC_THROTTLED_498 = 498;
-  /**
-   * W3C recommended status code for throttled operations
-   */
-  public static final int SC_TOO_MANY_REQUESTS_429 = 429;
-
-  public static final String FS_SWIFT = "fs.swift";
-
-  /**
-   * Prefix for all instance-specific values in the configuration: {@value}
-   */
-  public static final String SWIFT_SERVICE_PREFIX = FS_SWIFT + ".service.";
-
-  /**
-   * timeout for all connections: {@value}
-   */
-  public static final String SWIFT_CONNECTION_TIMEOUT =
-          FS_SWIFT + ".connect.timeout";
-
-  /**
-   * socket timeout for all connections: {@value}
-   */
-  public static final String SWIFT_SOCKET_TIMEOUT =
-          FS_SWIFT + ".socket.timeout";
-
-  /**
-   * the default socket timeout in millis {@value}.
-   * This controls how long the connection waits for responses from
-   * servers.
-   */
-  public static final int DEFAULT_SOCKET_TIMEOUT = 60000;
-
-  /**
-   * connection retry count for all connections: {@value}
-   */
-  public static final String SWIFT_RETRY_COUNT =
-          FS_SWIFT + ".connect.retry.count";
-
-  /**
-   * delay in millis between bulk (delete, rename, copy) operations: {@value}
-   */
-  public static final String SWIFT_THROTTLE_DELAY =
-          FS_SWIFT + ".connect.throttle.delay";
-
-  /**
-   * the default throttle delay in millis {@value}
-   */
-  public static final int DEFAULT_THROTTLE_DELAY = 0;
-
-  /**
-   * blocksize for all filesystems: {@value}
-   */
-  public static final String SWIFT_BLOCKSIZE =
-          FS_SWIFT + ".blocksize";
-
-  /**
-   * the default blocksize for filesystems in KB: {@value}
-   */
-  public static final int DEFAULT_SWIFT_BLOCKSIZE = 32 * 1024;
-
-  /**
-   * partition size for all filesystems in KB: {@value}
-   */
-  public static final String SWIFT_PARTITION_SIZE =
-    FS_SWIFT + ".partsize";
-
-  /**
-   * The default partition size for uploads: {@value}
-   */
-  public static final int DEFAULT_SWIFT_PARTITION_SIZE = 4608*1024;
-
-  /**
-   * request size for reads in KB: {@value}
-   */
-  public static final String SWIFT_REQUEST_SIZE =
-    FS_SWIFT + ".requestsize";
-
-  /**
-   * The default request size for reads: {@value}
-   */
-  public static final int DEFAULT_SWIFT_REQUEST_SIZE = 64;
-
-
-  public static final String HEADER_USER_AGENT="User-Agent";
-
-  /**
-   * The user agent sent in requests.
-   */
-  public static final String SWIFT_USER_AGENT= "Apache Hadoop Swift Client "
-                                               + VersionInfo.getBuildVersion();
-
-  /**
-   * Key for passing the service name as a property -not read from the
-   * configuration : {@value}
-   */
-  public static final String DOT_SERVICE = ".SERVICE-NAME";
-
-  /**
-   * Key for passing the container name as a property -not read from the
-   * configuration : {@value}
-   */
-  public static final String DOT_CONTAINER = ".CONTAINER-NAME";
-
-  public static final String DOT_AUTH_URL = ".auth.url";
-  public static final String DOT_TENANT = ".tenant";
-  public static final String DOT_USERNAME = ".username";
-  public static final String DOT_PASSWORD = ".password";
-  public static final String DOT_HTTP_PORT = ".http.port";
-  public static final String DOT_HTTPS_PORT = ".https.port";
-  public static final String DOT_REGION = ".region";
-  public static final String DOT_PROXY_HOST = ".proxy.host";
-  public static final String DOT_PROXY_PORT = ".proxy.port";
-  public static final String DOT_LOCATION_AWARE = ".location-aware";
-  public static final String DOT_APIKEY = ".apikey";
-  public static final String DOT_USE_APIKEY = ".useApikey";
-
-  /**
-   * flag to say use public URL
-   */
-  public static final String DOT_PUBLIC = ".public";
-
-  public static final String SWIFT_SERVICE_PROPERTY = FS_SWIFT + DOT_SERVICE;
-  public static final String SWIFT_CONTAINER_PROPERTY = FS_SWIFT + DOT_CONTAINER;
-
-  public static final String SWIFT_AUTH_PROPERTY = FS_SWIFT + DOT_AUTH_URL;
-  public static final String SWIFT_TENANT_PROPERTY = FS_SWIFT + DOT_TENANT;
-  public static final String SWIFT_USERNAME_PROPERTY = FS_SWIFT + DOT_USERNAME;
-  public static final String SWIFT_PASSWORD_PROPERTY = FS_SWIFT + DOT_PASSWORD;
-  public static final String SWIFT_APIKEY_PROPERTY = FS_SWIFT + DOT_APIKEY;
-  public static final String SWIFT_HTTP_PORT_PROPERTY = FS_SWIFT + DOT_HTTP_PORT;
-  public static final String SWIFT_HTTPS_PORT_PROPERTY = FS_SWIFT
-          + DOT_HTTPS_PORT;
-  public static final String SWIFT_REGION_PROPERTY = FS_SWIFT + DOT_REGION;
-  public static final String SWIFT_PUBLIC_PROPERTY = FS_SWIFT + DOT_PUBLIC;
-
-  public static final String SWIFT_USE_API_KEY_PROPERTY = FS_SWIFT + DOT_USE_APIKEY;
-
-  public static final String SWIFT_LOCATION_AWARE_PROPERTY = FS_SWIFT +
-            DOT_LOCATION_AWARE;
-
-  public static final String SWIFT_PROXY_HOST_PROPERTY = FS_SWIFT + DOT_PROXY_HOST;
-  public static final String SWIFT_PROXY_PORT_PROPERTY = FS_SWIFT + DOT_PROXY_PORT;
-  public static final String HTTP_ROUTE_DEFAULT_PROXY =
-    "http.route.default-proxy";
-  /**
-   * Topology to return when a block location is requested
-   */
-  public static final String TOPOLOGY_PATH = "/swift/unknown";
-  /**
-   * Block location to return when a block location is requested
-   */
-  public static final String BLOCK_LOCATION = "/default-rack/swift";
-  /**
-   * Default number of attempts to retry a connect request: {@value}
-   */
-  static final int DEFAULT_RETRY_COUNT = 3;
-  /**
-   * Default timeout in milliseconds for connection requests: {@value}
-   */
-  static final int DEFAULT_CONNECT_TIMEOUT = 15000;
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
deleted file mode 100644
index cf6bf9b972a..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
+++ /dev/null
@@ -1,1879 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.http;
-
-import org.apache.hadoop.fs.swift.util.HttpResponseUtils;
-import org.apache.http.Header;
-import org.apache.http.HttpHost;
-import org.apache.http.HttpResponse;
-import org.apache.http.HttpStatus;
-import org.apache.http.client.HttpClient;
-import org.apache.http.client.config.RequestConfig;
-import org.apache.http.client.methods.HttpDelete;
-import org.apache.http.client.methods.HttpGet;
-import org.apache.http.client.methods.HttpHead;
-import org.apache.http.client.methods.HttpPost;
-import org.apache.http.client.methods.HttpPut;
-import org.apache.http.client.methods.HttpRequestBase;
-import org.apache.http.client.methods.HttpUriRequest;
-import org.apache.http.config.SocketConfig;
-import org.apache.http.entity.ContentType;
-import org.apache.http.entity.InputStreamEntity;
-import org.apache.http.entity.StringEntity;
-import org.apache.http.impl.client.CloseableHttpClient;
-import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
-import org.apache.http.impl.client.HttpClientBuilder;
-import org.apache.http.message.BasicHeader;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.swift.auth.ApiKeyAuthenticationRequest;
-import org.apache.hadoop.fs.swift.auth.ApiKeyCredentials;
-import org.apache.hadoop.fs.swift.auth.AuthenticationRequest;
-import org.apache.hadoop.fs.swift.auth.AuthenticationRequestWrapper;
-import org.apache.hadoop.fs.swift.auth.AuthenticationResponse;
-import org.apache.hadoop.fs.swift.auth.AuthenticationWrapper;
-import org.apache.hadoop.fs.swift.auth.KeyStoneAuthRequest;
-import org.apache.hadoop.fs.swift.auth.KeystoneApiKeyCredentials;
-import org.apache.hadoop.fs.swift.auth.PasswordAuthenticationRequest;
-import org.apache.hadoop.fs.swift.auth.PasswordCredentials;
-import org.apache.hadoop.fs.swift.auth.entities.AccessToken;
-import org.apache.hadoop.fs.swift.auth.entities.Catalog;
-import org.apache.hadoop.fs.swift.auth.entities.Endpoint;
-import org.apache.hadoop.fs.swift.exceptions.SwiftAuthenticationFailedException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftBadRequestException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftInternalStateException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftThrottledRequestException;
-import org.apache.hadoop.fs.swift.util.Duration;
-import org.apache.hadoop.fs.swift.util.DurationStats;
-import org.apache.hadoop.fs.swift.util.DurationStatsTable;
-import org.apache.hadoop.fs.swift.util.JSONUtil;
-import org.apache.hadoop.fs.swift.util.SwiftObjectPath;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-
-import java.io.EOFException;
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.UnsupportedEncodingException;
-import java.net.URI;
-import java.net.URISyntaxException;
-import java.net.URLEncoder;
-import java.util.List;
-import java.util.Properties;
-
-import static org.apache.http.HttpStatus.*;
-import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*;
-
-/**
- * This implements the client-side of the Swift REST API.
- *
- * The core actions put, get and query data in the Swift object store,
- * after authenticating the client.
- *
- * <b>Logging:</b>
- *
- * Logging at DEBUG level displays detail about the actions of this
- * client, including HTTP requests and responses -excluding authentication
- * details.
- */
-public final class SwiftRestClient {
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftRestClient.class);
-
-  /**
-   * Header that says "use newest version" -ensures that
-   * the query doesn't pick up older versions served by
-   * an eventually consistent filesystem (except in the special case
-   * of a network partition, at which point no guarantees about
-   * consistency can be made).
-   */
-  public static final Header NEWEST =
-           new BasicHeader(SwiftProtocolConstants.X_NEWEST, "true");
-
-  /**
-   * the authentication endpoint as supplied in the configuration.
-   */
-  private final URI authUri;
-
-  /**
-   * Swift region. Some OpenStack installations have more than one region.
-   * In this case the user can specify the region with which Hadoop will be working.
-   */
-  private final String region;
-
-  /**
-   * tenant name.
-   */
-  private final String tenant;
-
-  /**
-   * user name.
-   */
-  private final String username;
-
-  /**
-   * user password.
-   */
-  private final String password;
-
-  /**
-   * user api key.
-   */
-  private final String apiKey;
-
-  /**
-   * The authentication request used to authenticate with Swift.
-   */
-  private final AuthenticationRequest authRequest;
-
-  /**
-   * This auth request is similar to {@link #authRequest},
-   * with one difference: it uses an alternative JSON representation
-   * for when the first form is not applicable.
-   */
-  private AuthenticationRequest keystoneAuthRequest;
-
-  private boolean useKeystoneAuthentication = false;
-
-  /**
-   * The container this client is working with.
-   */
-  private final String container;
-  private final String serviceDescription;
-
-  /**
-   * Access token (Secret).
-   */
-  private AccessToken token;
-
-  /**
-   * Endpoint for swift operations, obtained after authentication.
-   */
-  private URI endpointURI;
-
-  /**
-   * URI under which objects can be found.
-   * This is set when the user is authenticated -the URI
-   * is returned in the body of the success response.
-   */
-  private URI objectLocationURI;
-
-  /**
-   * The name of the service provider.
-   */
-  private final String serviceProvider;
-
-  /**
-   * Should the public swift endpoint be used, rather than the in-cluster one?
-   */
-  private final boolean usePublicURL;
-
-  /**
-   * Number of times to retry a connection.
-   */
-  private final int retryCount;
-
-  /**
-   * How long (in milliseconds) should a connection be attempted.
-   */
-  private final int connectTimeout;
-
-  /**
-   * How long (in milliseconds) a socket operation may block before timing out.
-   */
-  private final int socketTimeout;
-
-  /**
-   * How long (in milliseconds) between bulk operations.
-   */
-  private final int throttleDelay;
-
-  /**
-   * The name of a proxy host (can be null, in which case there is no proxy).
-   */
-  private String proxyHost;
-
-  /**
-   * The port of a proxy. This is ignored if {@link #proxyHost} is null.
-   */
-  private int proxyPort;
-
-  /**
-   * Flag to indicate whether or not the client should
-   * query for file location data.
-   */
-  private final boolean locationAware;
-
-  private final int partSizeKB;
-  /**
-   * The blocksize of this FS
-   */
-  private final int blocksizeKB;
-  private final int bufferSizeKB;
-
-  private final DurationStatsTable durationStats = new DurationStatsTable();
-  /**
-   * Get the objects query endpoint. Access is synchronized
-   * so that a simultaneous update of all auth data happens
-   * in one go.
-   */
-  private synchronized URI getEndpointURI() {
-    return endpointURI;
-  }
-
-  /**
-   * Get the token for Swift communication.
-   */
-  private synchronized AccessToken getToken() {
-    return token;
-  }
-
-  /**
-   * Setter of authentication and endpoint details.
-   * Being synchronized guarantees that all three fields are set up together.
-   * It is up to the reader to read all three fields in their own
-   * synchronized block to be sure that they are all consistent.
-   *
-   * @param endpoint endpoint URI
-   * @param objectLocation object location URI
-   * @param authToken auth token
-   */
-  private void setAuthDetails(URI endpoint,
-                              URI objectLocation,
-                              AccessToken authToken) {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug(String.format("setAuth: endpoint=%s; objectURI=%s; token=%s",
-              endpoint, objectLocation, authToken));
-    }
-    synchronized (this) {
-      endpointURI = endpoint;
-      objectLocationURI = objectLocation;
-      token = authToken;
-    }
-  }
-
-
-  /**
-   * Base class for all Swift REST operations.
-   *
-   * @param <M> request
-   * @param <R> result
-   */
-  private static abstract class HttpRequestProcessor
-      <M extends HttpUriRequest, R> {
-    public final M createRequest(String uri) throws IOException {
-      final M req = doCreateRequest(uri);
-      setup(req);
-      return req;
-    }
-
-    /**
-     * Override this to extract a result after the request is executed.
-     */
-    public abstract R extractResult(M req, HttpResponse resp)
-        throws IOException;
-
-    /**
-     * Factory method to create a REST method against the given URI.
-     *
-     * @param uri target
-     * @return method to invoke
-     */
-    protected abstract M doCreateRequest(String uri) throws IOException;
-
-    /**
-     * Override point: set up the request before it is executed.
-     */
-    protected void setup(M req) throws IOException {
-    }
-
-    /**
-     * Override point: what are the status codes that this operation supports?
-     *
-     * @return an array with the permitted status code(s)
-     */
-    protected int[] getAllowedStatusCodes() {
-      return new int[]{
-              SC_OK,
-              SC_CREATED,
-              SC_ACCEPTED,
-              SC_NO_CONTENT,
-              SC_PARTIAL_CONTENT,
-      };
-    }
-  }
-
-  private static abstract class GetRequestProcessor<R>
-      extends HttpRequestProcessor<HttpGet, R> {
-    @Override
-    protected final HttpGet doCreateRequest(String uri) {
-      return new HttpGet(uri);
-    }
-  }
-
-  private static abstract class PostRequestProcessor<R>
-      extends HttpRequestProcessor<HttpPost, R> {
-    @Override
-    protected final HttpPost doCreateRequest(String uri) {
-      return new HttpPost(uri);
-    }
-  }
-
-  /**
-   * There's a special type for auth messages, so that low-level
-   * message handlers can react to auth failures differently from everything
-   * else.
-   */
-  private static final class AuthPostRequest extends HttpPost {
-    private AuthPostRequest(String uri) {
-      super(uri);
-    }
-  }
-
-  /**
-   * Generate an auth message.
-   * @param <R> response
-   */
-  private static abstract class AuthRequestProcessor<R>
-      extends HttpRequestProcessor<AuthPostRequest, R> {
-    @Override
-    protected final AuthPostRequest doCreateRequest(String uri) {
-      return new AuthPostRequest(uri);
-    }
-  }
-
-  private static abstract class PutRequestProcessor<R>
-      extends HttpRequestProcessor<HttpPut, R> {
-    @Override
-    protected final HttpPut doCreateRequest(String uri) {
-      return new HttpPut(uri);
-    }
-
-    /**
-     * Override point: what are the status codes that this operation supports?
-     *
-     * @return the list of status codes to accept
-     */
-    @Override
-    protected int[] getAllowedStatusCodes() {
-      return new int[]{
-              SC_OK,
-              SC_CREATED,
-              SC_NO_CONTENT,
-              SC_ACCEPTED,
-      };
-    }
-  }
-
-  /**
-   * Copy operation.
-   *
-   * @param <R> result type
-   */
-  private static abstract class CopyRequestProcessor<R>
-      extends HttpRequestProcessor<CopyRequest, R> {
-    @Override
-    protected final CopyRequest doCreateRequest(String uri)
-        throws SwiftException {
-      CopyRequest copy = new CopyRequest();
-      try {
-        copy.setURI(new URI(uri));
-      } catch (URISyntaxException e) {
-        throw new SwiftException("Failed to create URI from: " + uri);
-      }
-      return copy;
-    }
-
-    /**
-     * The only allowed status code is 201, Created.
-     * @return an array with the permitted status code(s)
-     */
-    @Override
-    protected int[] getAllowedStatusCodes() {
-      return new int[]{
-              SC_CREATED
-      };
-    }
-  }
-
-  /**
-   * Delete operation.
-   *
-   * @param <R> result type
-   */
-  private static abstract class DeleteRequestProcessor<R>
-      extends HttpRequestProcessor<HttpDelete, R> {
-    @Override
-    protected final HttpDelete doCreateRequest(String uri) {
-      return new HttpDelete(uri);
-    }
-
-    @Override
-    protected int[] getAllowedStatusCodes() {
-      return new int[]{
-              SC_OK,
-              SC_ACCEPTED,
-              SC_NO_CONTENT,
-              SC_NOT_FOUND
-      };
-    }
-  }
-
-  private static abstract class HeadRequestProcessor<R>
-      extends HttpRequestProcessor<HttpHead, R> {
-    @Override
-    protected final HttpHead doCreateRequest(String uri) {
-      return new HttpHead(uri);
-    }
-  }
-
-
-  /**
-   * Create a Swift Rest Client instance.
-   *
-   * @param filesystemURI filesystem URI
-   * @param conf The configuration to use to extract the binding
-   * @throws SwiftConfigurationException the configuration is not valid for
-   * defining a rest client against the service
-   */
-  private SwiftRestClient(URI filesystemURI,
-                          Configuration conf)
-      throws SwiftConfigurationException {
-    Properties props = RestClientBindings.bind(filesystemURI, conf);
-    String stringAuthUri = getOption(props, SWIFT_AUTH_PROPERTY);
-    username = getOption(props, SWIFT_USERNAME_PROPERTY);
-    password = props.getProperty(SWIFT_PASSWORD_PROPERTY);
-    apiKey = props.getProperty(SWIFT_APIKEY_PROPERTY);
-    //optional
-    region = props.getProperty(SWIFT_REGION_PROPERTY);
-    //tenant is optional
-    tenant = props.getProperty(SWIFT_TENANT_PROPERTY);
-    //service is used for diagnostics
-    serviceProvider = props.getProperty(SWIFT_SERVICE_PROPERTY);
-    container = props.getProperty(SWIFT_CONTAINER_PROPERTY);
-    String isPubProp = props.getProperty(SWIFT_PUBLIC_PROPERTY, "false");
-    usePublicURL = "true".equals(isPubProp);
-
-    if (apiKey == null && password == null) {
-      throw new SwiftConfigurationException(
-          "Configuration for " + filesystemURI + " must contain either "
-          + SWIFT_PASSWORD_PROPERTY + " or "
-          + SWIFT_APIKEY_PROPERTY);
-    }
-    //create the (reusable) authentication request
-    if (password != null) {
-      authRequest = new PasswordAuthenticationRequest(tenant,
-          new PasswordCredentials(username, password));
-    } else {
-      authRequest = new ApiKeyAuthenticationRequest(tenant,
-          new ApiKeyCredentials(username, apiKey));
-      keystoneAuthRequest = new KeyStoneAuthRequest(tenant,
-          new KeystoneApiKeyCredentials(username, apiKey));
-    }
-    locationAware = "true".equals(
-      props.getProperty(SWIFT_LOCATION_AWARE_PROPERTY, "false"));
-
-    //now read in properties that are shared across all connections
-
-    //connection and retries
-    try {
-      retryCount = conf.getInt(SWIFT_RETRY_COUNT, DEFAULT_RETRY_COUNT);
-      connectTimeout = conf.getInt(SWIFT_CONNECTION_TIMEOUT,
-                                   DEFAULT_CONNECT_TIMEOUT);
-      socketTimeout = conf.getInt(SWIFT_SOCKET_TIMEOUT,
-                                   DEFAULT_SOCKET_TIMEOUT);
-
-      throttleDelay = conf.getInt(SWIFT_THROTTLE_DELAY,
-                                  DEFAULT_THROTTLE_DELAY);
-
-      //proxy options
-      proxyHost = conf.get(SWIFT_PROXY_HOST_PROPERTY);
-      proxyPort = conf.getInt(SWIFT_PROXY_PORT_PROPERTY, 8080);
-
-      blocksizeKB = conf.getInt(SWIFT_BLOCKSIZE,
-                                DEFAULT_SWIFT_BLOCKSIZE);
-      if (blocksizeKB <= 0) {
-        throw new SwiftConfigurationException("Invalid blocksize set in "
-                          + SWIFT_BLOCKSIZE
-                          + ": " + blocksizeKB);
-      }
-      partSizeKB = conf.getInt(SWIFT_PARTITION_SIZE,
-                               DEFAULT_SWIFT_PARTITION_SIZE);
-      if (partSizeKB <= 0) {
-        throw new SwiftConfigurationException("Invalid partition size set in "
-                                              + SWIFT_PARTITION_SIZE
-                                              + ": " + partSizeKB);
-      }
-
-      bufferSizeKB = conf.getInt(SWIFT_REQUEST_SIZE,
-                                 DEFAULT_SWIFT_REQUEST_SIZE);
-      if (bufferSizeKB <= 0) {
-        throw new SwiftConfigurationException("Invalid buffer size set in "
-                          + SWIFT_REQUEST_SIZE
-                          + ": " + bufferSizeKB);
-      }
-    } catch (NumberFormatException e) {
-      //convert exceptions raised parsing ints and longs into
-      // SwiftConfigurationException instances
-      throw new SwiftConfigurationException(e.toString(), e);
-    }
-    //everything you need for diagnostics. The password is omitted.
-    serviceDescription = String.format(
-      "Service={%s} container={%s} uri={%s}"
-      + " tenant={%s} user={%s} region={%s}"
-      + " publicURL={%b}"
-      + " location aware={%b}"
-      + " partition size={%d KB}, buffer size={%d KB}"
-      + " block size={%d KB}"
-      + " connect timeout={%d}, retry count={%d}"
-      + " socket timeout={%d}"
-      + " throttle delay={%d}"
-      ,
-      serviceProvider,
-      container,
-      stringAuthUri,
-      tenant,
-      username,
-      region != null ? region : "(none)",
-      usePublicURL,
-      locationAware,
-      partSizeKB,
-      bufferSizeKB,
-      blocksizeKB,
-      connectTimeout,
-      retryCount,
-      socketTimeout,
-      throttleDelay
-      );
-    if (LOG.isDebugEnabled()) {
-      LOG.debug(serviceDescription);
-    }
-    try {
-      this.authUri = new URI(stringAuthUri);
-    } catch (URISyntaxException e) {
-      throw new SwiftConfigurationException("The " + SWIFT_AUTH_PROPERTY
-              + " property was incorrect: "
-              + stringAuthUri, e);
-    }
-  }
-
-  /**
-   * Get a mandatory configuration option.
-   *
-   * @param props property set
-   * @param key   key
-   * @return value of the configuration
-   * @throws SwiftConfigurationException if there was no match for the key
-   */
-  private static String getOption(Properties props, String key) throws
-          SwiftConfigurationException {
-    String val = props.getProperty(key);
-    if (val == null) {
-      throw new SwiftConfigurationException("Undefined property: " + key);
-    }
-    return val;
-  }
-
-  /**
-   * Make an HTTP GET request to Swift to get a range of data in the object.
-   *
-   * @param path   path to object
-   * @param offset offset from file beginning
-   * @param length length of data to read
-   * @return The input stream -which must be closed afterwards.
-   * @throws IOException Problems
-   * @throws SwiftException swift specific error
-   * @throws FileNotFoundException path is not there
-   */
-  public HttpBodyContent getData(SwiftObjectPath path,
-                                 long offset,
-                                 long length) throws IOException {
-    if (offset < 0) {
-      throw new SwiftException("Invalid offset: " + offset
-                            + " in getDataAsInputStream( path=" + path
-                            + ", offset=" + offset
-                            + ", length =" + length + ")");
-    }
-    if (length <= 0) {
-      throw new SwiftException("Invalid length: " + length
-                + " in getDataAsInputStream( path="+ path
-                            + ", offset=" + offset
-                            + ", length ="+ length + ")");
-    }
-
-    final String range = String.format(SWIFT_RANGE_HEADER_FORMAT_PATTERN,
-            offset,
-            offset + length - 1);
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("getData:" + range);
-    }
-
-    return getData(path,
-                   new BasicHeader(HEADER_RANGE, range),
-                   SwiftRestClient.NEWEST);
-  }
-
-  /**
-   * Returns object length.
-   *
-   * @param uri file URI
-   * @return object length
-   * @throws SwiftException on swift-related issues
-   * @throws IOException on network/IO problems
-   */
-  public long getContentLength(URI uri) throws IOException {
-    preRemoteCommand("getContentLength");
-    return perform("getContentLength", uri, new HeadRequestProcessor<Long>() {
-      @Override
-      public Long extractResult(HttpHead req, HttpResponse resp)
-          throws IOException {
-        return HttpResponseUtils.getContentLength(resp);
-      }
-
-      @Override
-      protected void setup(HttpHead req) throws IOException {
-        super.setup(req);
-        req.addHeader(NEWEST);
-      }
-    });
-  }
-
-  /**
-   * Get the length of the remote object.
-   * @param path object to probe
-   * @return the content length
-   * @throws IOException on any failure
-   */
-  public long getContentLength(SwiftObjectPath path) throws IOException {
-    return getContentLength(pathToURI(path));
-  }
-
-  /**
-   * Get the path contents as an input stream.
-   * <b>Warning:</b> this input stream must be closed to avoid
-   * keeping Http connections open.
-   *
-   * @param path path to file
-   * @param requestHeaders http headers
-   * @return the response body, which must be closed after use
-   * @throws IOException on IO Faults
-   * @throws FileNotFoundException if there is nothing at the path
-   */
-  public HttpBodyContent getData(SwiftObjectPath path,
-                                 final Header... requestHeaders)
-          throws IOException {
-    preRemoteCommand("getData");
-    return doGet(pathToURI(path),
-            requestHeaders);
-  }
-
-  /**
-   * Returns object location as byte[].
-   *
-   * @param path path to file
-   * @param requestHeaders http headers
-   * @return byte[] file data or null if the object was not found
-   * @throws IOException on IO Faults
-   */
-  public byte[] getObjectLocation(SwiftObjectPath path,
-                                  final Header... requestHeaders) throws IOException {
-    if (!isLocationAware()) {
-      //if the filesystem is not location aware, do not ask for this information
-      return null;
-    }
-    preRemoteCommand("getObjectLocation");
-    try {
-      return perform("getObjectLocation", pathToObjectLocation(path),
-          new GetRequestProcessor<byte[]>() {
-            @Override
-            protected int[] getAllowedStatusCodes() {
-              return new int[]{
-                  SC_OK,
-                  SC_FORBIDDEN,
-                  SC_NO_CONTENT
-              };
-            }
-
-            @Override
-            public byte[] extractResult(HttpGet req, HttpResponse resp) throws
-                IOException {
-
-              //TODO: remove SC_NO_CONTENT if it depends on Swift versions
-              int statusCode = resp.getStatusLine().getStatusCode();
-              if (statusCode == SC_NOT_FOUND
-                  || statusCode == SC_FORBIDDEN
-                  || statusCode == SC_NO_CONTENT
-                  || resp.getEntity().getContent() == null) {
-                return null;
-              }
-              final InputStream responseBodyAsStream =
-                  resp.getEntity().getContent();
-              final byte[] locationData = new byte[1024];
-
-              return responseBodyAsStream.read(locationData) > 0 ?
-                  locationData : null;
-            }
-
-            @Override
-            protected void setup(HttpGet req)
-                throws SwiftInternalStateException {
-              setHeaders(req, requestHeaders);
-            }
-          });
-    } catch (IOException e) {
-      LOG.warn("Failed to get the location of " + path + ": " + e, e);
-      return null;
-    }
-  }
-
-  /**
-   * Create the URI needed to query the location of an object.
-   * @param path object path to retrieve information about
-   * @return the URI for the location operation
-   * @throws SwiftException if the URI could not be constructed
-   */
-  private URI pathToObjectLocation(SwiftObjectPath path) throws SwiftException {
-    URI uri;
-    String dataLocationURI = objectLocationURI.toString();
-    try {
-      if (path.toString().startsWith("/")) {
-        dataLocationURI = dataLocationURI.concat(path.toUriPath());
-      } else {
-        dataLocationURI = dataLocationURI.concat("/").concat(path.toUriPath());
-      }
-
-      uri = new URI(dataLocationURI);
-    } catch (URISyntaxException e) {
-      throw new SwiftException(e);
-    }
-    return uri;
-  }
-
-  /**
-   * Find objects under a prefix.
-   *
-   * @param path path prefix
-   * @param requestHeaders optional request headers
-   * @return byte[] file data or null if the object was not found
-   * @throws IOException on IO Faults
-   * @throws FileNotFoundException if nothing is at the end of the URI -that is,
-   * the directory is empty
-   */
-  public byte[] findObjectsByPrefix(SwiftObjectPath path,
-                          final Header... requestHeaders) throws IOException {
-    preRemoteCommand("findObjectsByPrefix");
-    URI uri;
-    String dataLocationURI = getEndpointURI().toString();
-    try {
-      String object = path.getObject();
-      if (object.startsWith("/")) {
-        object = object.substring(1);
-      }
-      object = encodeUrl(object);
-      dataLocationURI = dataLocationURI.concat("/")
-              .concat(path.getContainer())
-              .concat("/?prefix=")
-              .concat(object)
-      ;
-      uri = new URI(dataLocationURI);
-    } catch (URISyntaxException e) {
-      throw new SwiftException("Bad URI: " + dataLocationURI, e);
-    }
-
-    return perform("findObjectsByPrefix", uri,
-        new GetRequestProcessor<byte[]>() {
-          @Override
-          public byte[] extractResult(HttpGet req, HttpResponse resp)
-              throws IOException {
-            if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) {
-              //no result
-              throw new FileNotFoundException("Not found " + req.getURI());
-            }
-            return HttpResponseUtils.getResponseBody(resp);
-          }
-
-          @Override
-          protected int[] getAllowedStatusCodes() {
-            return new int[]{
-                SC_OK,
-                SC_NOT_FOUND
-            };
-          }
-
-          @Override
-          protected void setup(HttpGet req) throws SwiftInternalStateException {
-            setHeaders(req, requestHeaders);
-          }
-        });
-  }
-
-  /**
-   * Find objects in a directory.
-   *
-   * @param path path prefix
-   * @param listDeep ask for all objects under the path, not just immediate children
-   * @param requestHeaders optional request headers
-   * @return byte[] file data or null if the object was not found
-   * @throws IOException on IO Faults
-   * @throws FileNotFoundException if nothing is at the end of the URI -that is,
-   * the directory is empty
-   */
-  public byte[] listDeepObjectsInDirectory(SwiftObjectPath path,
-                                           boolean listDeep,
-                                       final Header... requestHeaders)
-          throws IOException {
-    preRemoteCommand("listDeepObjectsInDirectory");
-
-    String endpoint = getEndpointURI().toString();
-    StringBuilder dataLocationURI = new StringBuilder();
-    dataLocationURI.append(endpoint);
-    String object = path.getObject();
-    if (object.startsWith("/")) {
-      object = object.substring(1);
-    }
-    if (!object.endsWith("/")) {
-      object = object.concat("/");
-    }
-
-    if (object.equals("/")) {
-      object = "";
-    }
-
-    dataLocationURI = dataLocationURI.append("/")
-            .append(path.getContainer())
-            .append("/?prefix=")
-            .append(object)
-            .append("&format=json");
-
-    //when not listing deep, ask for a delimited (single-level) listing
-    if (!listDeep) {
-      dataLocationURI.append("&delimiter=/");
-    }
-
-    return findObjects(dataLocationURI.toString(), requestHeaders);
-  }
-
-  /**
-   * Find objects in a location.
-   * @param location URI
-   * @param requestHeaders optional request headers
-   * @return the body of the response
-   * @throws IOException IO problems
-   */
-  private byte[] findObjects(String location, final Header[] requestHeaders)
-      throws IOException {
-    URI uri;
-    preRemoteCommand("findObjects");
-    try {
-      uri = new URI(location);
-    } catch (URISyntaxException e) {
-      throw new SwiftException("Bad URI: " + location, e);
-    }
-
-    return perform("findObjects", uri,
-        new GetRequestProcessor<byte[]>() {
-          @Override
-          public byte[] extractResult(HttpGet req, HttpResponse resp)
-              throws IOException {
-            if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) {
-              //no result
-              throw new FileNotFoundException("Not found " + req.getURI());
-            }
-            return HttpResponseUtils.getResponseBody(resp);
-          }
-
-          @Override
-          protected int[] getAllowedStatusCodes() {
-            return new int[]{
-                SC_OK,
-                SC_NOT_FOUND
-            };
-          }
-
-          @Override
-          protected void setup(HttpGet req) throws SwiftInternalStateException {
-            setHeaders(req, requestHeaders);
-          }
-        });
-  }
-
-  /**
-   * Copy an object. This is done by sending a COPY method to the filesystem
-   * which is required to handle this WebDAV-level extension to the
-   * base HTTP operations.
-   *
-   * @param src source path
-   * @param dst destination path
-   * @param headers any headers
-   * @return true if the status code was considered successful
-   * @throws IOException on IO Faults
-   */
-  public boolean copyObject(SwiftObjectPath src, final SwiftObjectPath dst,
-                            final Header... headers) throws IOException {
-
-    preRemoteCommand("copyObject");
-
-    return perform("copy", pathToURI(src),
-        new CopyRequestProcessor<Boolean>() {
-          @Override
-          public Boolean extractResult(CopyRequest req, HttpResponse resp)
-              throws IOException {
-            return resp.getStatusLine().getStatusCode() != SC_NOT_FOUND;
-          }
-
-          @Override
-          protected void setup(CopyRequest req) throws
-              SwiftInternalStateException {
-            setHeaders(req, headers);
-            req.addHeader(HEADER_DESTINATION, dst.toUriPath());
-          }
-        });
-  }
-
-  /**
-   * Uploads file as Input Stream to Swift.
-   * The data stream will be closed after the request.
-   *
-   * @param path path to Swift
-   * @param data object data
-   * @param length length of data
-   * @param requestHeaders http headers
-   * @throws IOException on IO Faults
-   */
-  public void upload(SwiftObjectPath path,
-                     final InputStream data,
-                     final long length,
-                     final Header... requestHeaders)
-          throws IOException {
-    preRemoteCommand("upload");
-
-    try {
-      perform("upload", pathToURI(path), new PutRequestProcessor<byte[]>() {
-        @Override
-        public byte[] extractResult(HttpPut req, HttpResponse resp)
-            throws IOException {
-          return HttpResponseUtils.getResponseBody(resp);
-        }
-
-        @Override
-        protected void setup(HttpPut req) throws
-                        SwiftInternalStateException {
-          req.setEntity(new InputStreamEntity(data, length));
-          setHeaders(req, requestHeaders);
-        }
-      });
-    } finally {
-      data.close();
-    }
-
-  }
-
-
-  /**
-   * Deletes object from swift.
-   * The result is true if this operation did the deletion.
-   *
-   * @param path           path to file
-   * @param requestHeaders http headers
-   * @throws IOException on IO Faults
-   */
-  public boolean delete(SwiftObjectPath path, final Header... requestHeaders) throws IOException {
-    preRemoteCommand("delete");
-
-    return perform("", pathToURI(path), new DeleteRequestProcessor<Boolean>() {
-      @Override
-      public Boolean extractResult(HttpDelete req, HttpResponse resp)
-          throws IOException {
-        return resp.getStatusLine().getStatusCode() == SC_NO_CONTENT;
-      }
-
-      @Override
-      protected void setup(HttpDelete req) throws
-                    SwiftInternalStateException {
-        setHeaders(req, requestHeaders);
-      }
-    });
-  }
-
-  /**
-   * Issue a head request.
-   * @param reason reason -used in logs
-   * @param path path to query
-   * @param requestHeaders request header
-   * @return the response headers. This may be an empty list
-   * @throws IOException IO problems
-   * @throws FileNotFoundException if there is nothing at the end
-   */
-  public Header[] headRequest(String reason,
-                              SwiftObjectPath path,
-                              final Header... requestHeaders)
-          throws IOException {
-
-    preRemoteCommand("headRequest: "+ reason);
-    return perform(reason, pathToURI(path),
-        new HeadRequestProcessor<Header[]>() {
-          @Override
-          public Header[] extractResult(HttpHead req, HttpResponse resp)
-              throws IOException {
-            if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) {
-              throw new FileNotFoundException("Not Found " + req.getURI());
-            }
-            return resp.getAllHeaders();
-          }
-
-          @Override
-          protected void setup(HttpHead req) throws
-              SwiftInternalStateException {
-            setHeaders(req, requestHeaders);
-          }
-        });
-  }
-
-  /**
-   * Issue a put request.
-   * @param path path
-   * @param requestHeaders optional headers
-   * @return the HTTP response
-   * @throws IOException any problem
-   */
-  public int putRequest(SwiftObjectPath path, final Header... requestHeaders)
-          throws IOException {
-
-    preRemoteCommand("putRequest");
-    return perform(pathToURI(path), new PutRequestProcessor<Integer>() {
-
-      @Override
-      public Integer extractResult(HttpPut req, HttpResponse resp)
-          throws IOException {
-        return resp.getStatusLine().getStatusCode();
-      }
-
-      @Override
-      protected void setup(HttpPut req) throws
-                    SwiftInternalStateException {
-        setHeaders(req, requestHeaders);
-      }
-    });
-  }
-
-  /**
-   * Authenticate to Openstack Keystone.
-   * As well as returning the access token, the member fields {@link #token},
-   * {@link #endpointURI} and {@link #objectLocationURI} are set up for re-use.
-   * <p>
-   * This method is re-entrant -if more than one thread attempts to authenticate
-   * neither will block -but the field values with have those of the last caller.
-   *
-   * @return authenticated access token
-   */
-  public AccessToken authenticate() throws IOException {
-    final AuthenticationRequest authenticationRequest;
-    if (useKeystoneAuthentication) {
-      authenticationRequest = keystoneAuthRequest;
-    } else {
-      authenticationRequest = authRequest;
-    }
-
-    LOG.debug("started authentication");
-    return perform("authentication",
-                   authUri,
-                   new AuthenticationPost(authenticationRequest));
-  }
-
-  private final class AuthenticationPost extends
-      AuthRequestProcessor<AccessToken> {
-    final AuthenticationRequest authenticationRequest;
-
-    private AuthenticationPost(AuthenticationRequest authenticationRequest) {
-      this.authenticationRequest = authenticationRequest;
-    }
-
-    @Override
-    protected void setup(AuthPostRequest req) throws IOException {
-      req.setEntity(getAuthenticationRequst(authenticationRequest));
-    }
-
-    /**
-     * specification says any of the 2xxs are OK, so list all
-     * the standard ones
-     * @return a set of 2XX status codes.
-     */
-    @Override
-    protected int[] getAllowedStatusCodes() {
-      return new int[]{
-        SC_OK,
-        SC_BAD_REQUEST,
-        SC_CREATED,
-        SC_ACCEPTED,
-        SC_NON_AUTHORITATIVE_INFORMATION,
-        SC_NO_CONTENT,
-        SC_RESET_CONTENT,
-        SC_PARTIAL_CONTENT,
-        SC_MULTI_STATUS,
-        SC_UNAUTHORIZED //if request unauthorized, try another method
-      };
-    }
-
-    @Override
-    public AccessToken extractResult(AuthPostRequest req, HttpResponse resp)
-        throws IOException {
-      //initial check for failure codes leading to authentication failures
-      if (resp.getStatusLine().getStatusCode() == SC_BAD_REQUEST) {
-        throw new SwiftAuthenticationFailedException(
-       authenticationRequest.toString(), "POST", authUri, resp);
-      }
-
-      final AuthenticationResponse access =
-          JSONUtil.toObject(HttpResponseUtils.getResponseBodyAsString(resp),
-                            AuthenticationWrapper.class).getAccess();
-      final List<Catalog> serviceCatalog = access.getServiceCatalog();
-      //locate the specific service catalog that defines Swift; variations
-      //in the name of this add complexity to the search
-      StringBuilder catList = new StringBuilder();
-      StringBuilder regionList = new StringBuilder();
-
-      //these fields are all set together at the end of the operation
-      URI endpointURI = null;
-      URI objectLocation;
-      Endpoint swiftEndpoint = null;
-      AccessToken accessToken;
-
-      for (Catalog catalog : serviceCatalog) {
-        String name = catalog.getName();
-        String type = catalog.getType();
-        String descr = String.format("[%s: %s]; ", name, type);
-        catList.append(descr);
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Catalog entry " + descr);
-        }
-        if (name.equals(SERVICE_CATALOG_SWIFT)
-            || name.equals(SERVICE_CATALOG_CLOUD_FILES)
-            || type.equals(SERVICE_CATALOG_OBJECT_STORE)) {
-          //swift is found
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Found swift catalog as " + name + " => " + type);
-          }
-          //now go through the endpoints
-          for (Endpoint endpoint : catalog.getEndpoints()) {
-            String endpointRegion = endpoint.getRegion();
-            URI publicURL = endpoint.getPublicURL();
-            URI internalURL = endpoint.getInternalURL();
-            descr = String.format("[%s => %s / %s]; ",
-                                  endpointRegion,
-                                  publicURL,
-                                  internalURL);
-            regionList.append(descr);
-            if (LOG.isDebugEnabled()) {
-              LOG.debug("Endpoint " + descr);
-            }
-            if (region == null || endpointRegion.equals(region)) {
-              endpointURI = usePublicURL ? publicURL : internalURL;
-              swiftEndpoint = endpoint;
-              break;
-            }
-          }
-        }
-      }
-      if (endpointURI == null) {
-        String message = "Could not find swift service from auth URL "
-                         + authUri
-                         + " and region '" + region + "'. "
-                         + "Categories: " + catList
-                         + ((regionList.length() > 0) ?
-                            ("regions: " + regionList)
-                                                      : "No regions");
-        throw new SwiftInvalidResponseException(message,
-                                                SC_OK,
-                                                "authenticating",
-                                                authUri);
-
-      }
-
-
-      accessToken = access.getToken();
-      String path = SWIFT_OBJECT_AUTH_ENDPOINT
-                    + swiftEndpoint.getTenantId();
-      String host = endpointURI.getHost();
-      try {
-        objectLocation = new URI(endpointURI.getScheme(),
-                                 null,
-                                 host,
-                                 endpointURI.getPort(),
-                                 path,
-                                 null,
-                                 null);
-      } catch (URISyntaxException e) {
-        throw new SwiftException("object endpoint URI is incorrect: "
-                                 + endpointURI
-                                 + " + " + path,
-                                 e);
-      }
-      setAuthDetails(endpointURI, objectLocation, accessToken);
-
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("authenticated against " + endpointURI);
-      }
-      createDefaultContainer();
-      return accessToken;
-    }
-  }
-
-  private StringEntity getAuthenticationRequst(
-      AuthenticationRequest authenticationRequest) throws IOException {
-    final String data = JSONUtil.toJSON(new AuthenticationRequestWrapper(
-            authenticationRequest));
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Authenticating with " + authenticationRequest);
-    }
-    return new StringEntity(data, ContentType.create("application/json",
-        "UTF-8"));
-  }
-
-  /**
-   * create default container if it doesn't exist for Hadoop Swift integration.
-   * non-reentrant, as this should only be needed once.
-   *
-   * @throws IOException IO problems.
-   */
-  private synchronized void createDefaultContainer() throws IOException {
-    createContainer(container);
-  }
-
-  /**
-   * Create a container -if it already exists, do nothing.
-   *
-   * @param containerName the container name
-   * @throws IOException IO problems
-   * @throws SwiftBadRequestException invalid container name
-   * @throws SwiftInvalidResponseException error from the server
-   */
-  public void createContainer(String containerName) throws IOException {
-    SwiftObjectPath objectPath = new SwiftObjectPath(containerName, "");
-    try {
-      //see if the data is there
-      headRequest("createContainer", objectPath, NEWEST);
-    } catch (FileNotFoundException ex) {
-      int status = 0;
-      try {
-        status = putRequest(objectPath);
-      } catch (FileNotFoundException e) {
-        //triggered by a very bad container name.
-        //re-insert the 404 result into the status
-        status = SC_NOT_FOUND;
-      }
-      if (status == SC_BAD_REQUEST) {
-        throw new SwiftBadRequestException(
-          "Bad request -authentication failure or bad container name?",
-          status,
-          "PUT",
-          null);
-      }
-      if (!isStatusCodeExpected(status,
-              SC_OK,
-              SC_CREATED,
-              SC_ACCEPTED,
-              SC_NO_CONTENT)) {
-        throw new SwiftInvalidResponseException("Couldn't create container "
-                + containerName +
-                " for storing data in Swift." +
-                " Try to create container " +
-                containerName + " manually ",
-                status,
-                "PUT",
-                null);
-      } else {
-        throw ex;
-      }
-    }
-  }
-
-  /**
-   * Trigger an initial auth operation if some of the needed
-   * fields are missing.
-   *
-   * @throws IOException on problems
-   */
-  private void authIfNeeded() throws IOException {
-    if (getEndpointURI() == null) {
-      authenticate();
-    }
-  }
-
-  /**
-   * Pre-execution actions to be performed by methods. Currently this
-   * <ul>
-   *   <li>Logs the operation at TRACE</li>
-   *   <li>Authenticates the client -if needed</li>
-   * </ul>
-   * @throws IOException
-   */
-  private void preRemoteCommand(String operation) throws IOException {
-    if (LOG.isTraceEnabled()) {
-      LOG.trace("Executing " + operation);
-    }
-    authIfNeeded();
-  }
-
-
-  /**
-   * Performs the HTTP request, validates the response code and returns
-   * the received data. HTTP Status codes are converted into exceptions.
-   *
-   * @param uri URI to source
-   * @param processor HttpMethodProcessor
-   * @param <M> method
-   * @param <R> result type
-   * @return result of HTTP request
-   * @throws IOException IO problems
-   * @throws SwiftBadRequestException the status code indicated "Bad request"
-   * @throws SwiftInvalidResponseException the status code is out of range
-   * for the action (excluding 404 responses)
-   * @throws SwiftInternalStateException the internal state of this client
-   * is invalid
-   * @throws FileNotFoundException a 404 response was returned
-   */
-  private <M extends HttpRequestBase, R> R perform(URI uri,
-                      HttpRequestProcessor<M, R> processor)
-    throws IOException,
-           SwiftBadRequestException,
-           SwiftInternalStateException,
-           SwiftInvalidResponseException,
-           FileNotFoundException {
-    return perform("",uri, processor);
-  }
-
-  /**
-   * Performs the HTTP request, validates the response code and returns
-   * the received data. HTTP Status codes are converted into exceptions.
-   * @param reason why is this operation taking place. Used for statistics
-   * @param uri URI to source
-   * @param processor HttpMethodProcessor
-   * @param <M> method
-   * @param <R> result type
-   * @return result of HTTP request
-   * @throws IOException IO problems
-   * @throws SwiftBadRequestException the status code indicated "Bad request"
-   * @throws SwiftInvalidResponseException the status code is out of range
-   * for the action (excluding 404 responses)
-   * @throws SwiftInternalStateException the internal state of this client
-   * is invalid
-   * @throws FileNotFoundException a 404 response was returned
-   */
-  private <M extends HttpRequestBase, R> R perform(String reason, URI uri,
-      HttpRequestProcessor<M, R> processor)
-      throws IOException, SwiftBadRequestException, SwiftInternalStateException,
-            SwiftInvalidResponseException, FileNotFoundException {
-    checkNotNull(uri);
-    checkNotNull(processor);
-
-    final M req = processor.createRequest(uri.toString());
-    req.addHeader(HEADER_USER_AGENT, SWIFT_USER_AGENT);
-    //retry policy
-    HttpClientBuilder clientBuilder = HttpClientBuilder.create();
-    clientBuilder.setRetryHandler(
-        new DefaultHttpRequestRetryHandler(retryCount, false));
-    RequestConfig.Builder requestConfigBuilder =
-        RequestConfig.custom().setConnectTimeout(connectTimeout);
-    if (proxyHost != null) {
-      requestConfigBuilder.setProxy(new HttpHost(proxyHost, proxyPort));
-    }
-    clientBuilder.setDefaultRequestConfig(requestConfigBuilder.build());
-    clientBuilder.setDefaultSocketConfig(
-        SocketConfig.custom().setSoTimeout(socketTimeout).build());
-    Duration duration = new Duration();
-    boolean success = false;
-    HttpResponse resp;
-    try {
-      // client should not be closed in this method because
-      // the connection can be used later
-      CloseableHttpClient client = clientBuilder.build();
-      int statusCode = 0;
-      try {
-        resp = exec(client, req);
-        statusCode = checkNotNull(resp.getStatusLine().getStatusCode());
-      } catch (IOException e) {
-        //rethrow with extra diagnostics and wiki links
-        throw ExceptionDiags.wrapException(uri.toString(), req.getMethod(), e);
-      }
-
-      //look at the response and see if it was valid or not.
-      //Valid is more than a simple 200; even 404 "not found" is considered
-      //valid -which it is for many methods.
-
-      //validate the allowed status code for this operation
-      int[] allowedStatusCodes = processor.getAllowedStatusCodes();
-      boolean validResponse = isStatusCodeExpected(statusCode,
-          allowedStatusCodes);
-
-      if (!validResponse) {
-        IOException ioe = buildException(uri, req, resp, statusCode);
-        throw ioe;
-      }
-
-      R r = processor.extractResult(req, resp);
-      success = true;
-      return r;
-    } catch (IOException e) {
-      //release the connection -always
-      req.releaseConnection();
-      throw e;
-    } finally {
-      duration.finished();
-      durationStats.add(req.getMethod() + " " + reason, duration, success);
-    }
-  }
-
-  /**
-   * Build an exception from a failed operation. This can include generating
-   * specific exceptions (e.g. FileNotFound), as well as the default
-   * {@link SwiftInvalidResponseException}.
-   *
-   * @param uri URI for operation
-   * @param resp operation that failed
-   * @param statusCode status code
-   * @param <M> method type
-   * @return an exception to throw
-   */
-  private <M extends HttpUriRequest> IOException buildException(
-      URI uri, M req, HttpResponse resp, int statusCode) {
-    IOException fault;
-
-    //log the failure @debug level
-    String errorMessage = String.format("Method %s on %s failed, status code: %d," +
-            " status line: %s",
-            req.getMethod(),
-            uri,
-            statusCode,
-            resp.getStatusLine()
-    );
-    if (LOG.isDebugEnabled()) {
-      LOG.debug(errorMessage);
-    }
-    //send the command
-    switch (statusCode) {
-    case SC_NOT_FOUND:
-      fault = new FileNotFoundException("Operation " + req.getMethod()
-          + " on " + uri);
-      break;
-
-    case SC_BAD_REQUEST:
-      //bad HTTP request
-      fault =  new SwiftBadRequestException("Bad request against " + uri,
-          req.getMethod(), uri, resp);
-      break;
-
-    case SC_REQUESTED_RANGE_NOT_SATISFIABLE:
-      //out of range
-      StringBuilder errorText = new StringBuilder(
-          resp.getStatusLine().getReasonPhrase());
-      //get the requested length
-      Header requestContentLen = req.getFirstHeader(HEADER_CONTENT_LENGTH);
-      if (requestContentLen != null) {
-        errorText.append(" requested ").append(requestContentLen.getValue());
-      }
-      //and the result
-      Header availableContentRange = resp.getFirstHeader(HEADER_CONTENT_RANGE);
-
-      if (availableContentRange != null) {
-        errorText.append(" available ")
-            .append(availableContentRange.getValue());
-      }
-      fault = new EOFException(errorText.toString());
-      break;
-
-    case SC_UNAUTHORIZED:
-      //auth failure; should only happen on the second attempt
-      fault  = new SwiftAuthenticationFailedException(
-          "Operation not authorized- current access token =" + getToken(),
-          req.getMethod(),
-          uri,
-          resp);
-      break;
-
-    case SwiftProtocolConstants.SC_TOO_MANY_REQUESTS_429:
-    case SwiftProtocolConstants.SC_THROTTLED_498:
-      //response code that may mean the client is being throttled
-      fault  = new SwiftThrottledRequestException(
-          "Client is being throttled: too many requests",
-          req.getMethod(),
-          uri,
-          resp);
-      break;
-
-    default:
-      //return a generic invalid HTTP response
-      fault = new SwiftInvalidResponseException(
-          errorMessage,
-          req.getMethod(),
-          uri,
-          resp);
-    }
-
-    return fault;
-  }
-
-  /**
-   * Exec a GET request and return the input stream of the response.
-   *
-   * @param uri URI to GET
-   * @param requestHeaders request headers
-   * @return the input stream. This must be closed to avoid log errors
-   * @throws IOException
-   */
-  private HttpBodyContent doGet(final URI uri, final Header... requestHeaders) throws IOException {
-    return perform("", uri, new GetRequestProcessor<HttpBodyContent>() {
-      @Override
-      public HttpBodyContent extractResult(HttpGet req, HttpResponse resp)
-          throws IOException {
-        return new HttpBodyContent(
-            new HttpInputStreamWithRelease(uri, req, resp),
-            HttpResponseUtils.getContentLength(resp));
-      }
-
-      @Override
-      protected void setup(HttpGet req) throws
-                    SwiftInternalStateException {
-        setHeaders(req, requestHeaders);
-      }
-    });
-  }
-
-  /**
-   * Create an instance against a specific FS URI.
-   *
-   * @param filesystemURI filesystem to bond to
-   * @param config source of configuration data
-   * @return REST client instance
-   * @throws IOException on instantiation problems
-   */
-  public static SwiftRestClient getInstance(URI filesystemURI,
-                                            Configuration config) throws IOException {
-    return new SwiftRestClient(filesystemURI, config);
-  }
-
-
-  /**
-   * Converts Swift path to URI to make request.
-   * This is public for unit testing
-   *
-   * @param path path to object
-   * @param endpointURI domain url e.g. http://domain.com
-   * @return valid URI for object
-   * @throws SwiftException
-   */
-  public static URI pathToURI(SwiftObjectPath path,
-                              URI endpointURI) throws SwiftException {
-    checkNotNull(endpointURI, "Null Endpoint -client is not authenticated");
-
-    String dataLocationURI = endpointURI.toString();
-    try {
-
-      dataLocationURI = SwiftUtils.joinPaths(dataLocationURI, encodeUrl(path.toUriPath()));
-      return new URI(dataLocationURI);
-    } catch (URISyntaxException e) {
-      throw new SwiftException("Failed to create URI from " + dataLocationURI, e);
-    }
-  }
-
-  /**
-   * Encode the URL. This extends {@link URLEncoder#encode(String, String)}
-   * with a replacement of + with %20.
-   * @param url URL string
-   * @return an encoded string
-   * @throws SwiftException if the URL cannot be encoded
-   */
-  private static String encodeUrl(String url) throws SwiftException {
-    if (url.matches(".*\\s+.*")) {
-      try {
-        url = URLEncoder.encode(url, "UTF-8");
-        url = url.replace("+", "%20");
-      } catch (UnsupportedEncodingException e) {
-        throw new SwiftException("failed to encode URI", e);
-      }
-    }
-
-    return url;
-  }
-
-  /**
-   * Convert a swift path to a URI relative to the current endpoint.
-   *
-   * @param path path
-   * @return an path off the current endpoint URI.
-   * @throws SwiftException
-   */
-  private URI pathToURI(SwiftObjectPath path) throws SwiftException {
-    return pathToURI(path, getEndpointURI());
-  }
-
-  /**
-   * Add the headers to the method, and the auth token (which must be set).
-   * @param method method to update
-   * @param requestHeaders the list of headers
-   * @throws SwiftInternalStateException not yet authenticated
-   */
-  private void setHeaders(HttpUriRequest method, Header[] requestHeaders)
-      throws SwiftInternalStateException {
-    for (Header header : requestHeaders) {
-      method.addHeader(header);
-    }
-    setAuthToken(method, getToken());
-  }
-
-
-  /**
-   * Set the auth key header of the method to the token ID supplied.
-   *
-   * @param method method
-   * @param accessToken access token
-   * @throws SwiftInternalStateException if the client is not yet authenticated
-   */
-  private void setAuthToken(HttpUriRequest method, AccessToken accessToken)
-      throws SwiftInternalStateException {
-    checkNotNull(accessToken,"Not authenticated");
-    method.addHeader(HEADER_AUTH_KEY, accessToken.getId());
-  }
-
-  /**
-   * Execute a method in a new HttpClient instance. If the auth failed,
-   * authenticate then retry the method.
-   *
-   * @param req request to exec
-   * @param client client to use
-   * @param <M> Request type
-   * @return the status code
-   * @throws IOException on any failure
-   */
-  private <M extends HttpUriRequest> HttpResponse exec(HttpClient client, M req)
-      throws IOException {
-    HttpResponse resp = execWithDebugOutput(req, client);
-    int statusCode = resp.getStatusLine().getStatusCode();
-    if ((statusCode == HttpStatus.SC_UNAUTHORIZED
-            || statusCode == HttpStatus.SC_BAD_REQUEST)
-        && req instanceof AuthPostRequest
-            && !useKeystoneAuthentication) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Operation failed with status " + statusCode
-            + " attempting keystone auth");
-      }
-      //if rackspace key authentication failed - try custom Keystone authentication
-      useKeystoneAuthentication = true;
-      final AuthPostRequest authentication = (AuthPostRequest) req;
-      //replace rackspace auth with keystone one
-      authentication.setEntity(getAuthenticationRequst(keystoneAuthRequest));
-      resp = execWithDebugOutput(req, client);
-    }
-
-    if (statusCode == HttpStatus.SC_UNAUTHORIZED ) {
-      //unauthed -or the auth uri rejected it.
-
-      if (req instanceof AuthPostRequest) {
-          //unauth response from the AUTH URI itself.
-          throw new SwiftAuthenticationFailedException(authRequest.toString(),
-                                                       "auth",
-                                                       authUri,
-                                                       resp);
-      }
-      //any other URL: try again
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Reauthenticating");
-      }
-      //re-auth, this may recurse into the same dir
-      authenticate();
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Retrying original request");
-      }
-      resp = execWithDebugOutput(req, client);
-    }
-    return resp;
-  }
-
-  /**
-   * Execute the request with the request and response logged at debug level.
-   * @param req request to execute
-   * @param client client to use
-   * @param <M> method type
-   * @return the status code
-   * @throws IOException any failure reported by the HTTP client.
-   */
-  private <M extends HttpUriRequest> HttpResponse execWithDebugOutput(M req,
-      HttpClient client) throws IOException {
-    if (LOG.isDebugEnabled()) {
-      StringBuilder builder = new StringBuilder(
-              req.getMethod() + " " + req.getURI() + "\n");
-      for (Header header : req.getAllHeaders()) {
-        builder.append(header.toString());
-      }
-      LOG.debug(builder.toString());
-    }
-    HttpResponse resp = client.execute(req);
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Status code = " + resp.getStatusLine().getStatusCode());
-    }
-    return resp;
-  }
-
-  /**
-   * Ensures that an object reference passed as a parameter to the calling
-   * method is not null.
-   *
-   * @param reference an object reference
-   * @return the non-null reference that was validated
-   * @throws NullPointerException if {@code reference} is null
-   */
-  private static <T> T checkNotNull(T reference) throws
-            SwiftInternalStateException {
-    return checkNotNull(reference, "Null Reference");
-  }
-
-  private static <T> T checkNotNull(T reference, String message) throws
-            SwiftInternalStateException {
-    if (reference == null) {
-      throw new SwiftInternalStateException(message);
-    }
-    return reference;
-  }
-
-  /**
-   * Check for a status code being expected -takes a list of expected values
-   *
-   * @param status received status
-   * @param expected expected value
-   * @return true if status is an element of [expected]
-   */
-  private boolean isStatusCodeExpected(int status, int... expected) {
-    for (int code : expected) {
-      if (status == code) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-
-  @Override
-  public String toString() {
-    return "Swift client: " + serviceDescription;
-  }
-
-  /**
-   * Get the region which this client is bound to
-   * @return the region
-   */
-  public String getRegion() {
-    return region;
-  }
-
-  /**
-   * Get the tenant to which this client is bound
-   * @return the tenant
-   */
-  public String getTenant() {
-    return tenant;
-  }
-
-  /**
-   * Get the username this client identifies itself as
-   * @return the username
-   */
-  public String getUsername() {
-    return username;
-  }
-
-  /**
-   * Get the container to which this client is bound
-   * @return the container
-   */
-  public String getContainer() {
-    return container;
-  }
-
-  /**
-   * Is this client bound to a location aware Swift blobstore
-   * -that is, can you query for the location of partitions?
-   * @return true iff the location of multipart file uploads
-   * can be determined.
-   */
-  public boolean isLocationAware() {
-    return locationAware;
-  }
-
-  /**
-   * Get the blocksize of this filesystem
-   * @return a blocksize &gt; 0
-   */
-  public long getBlocksizeKB() {
-    return blocksizeKB;
-  }
-
-  /**
-   * Get the partition size in KB.
-   * @return the partition size
-   */
-  public int getPartSizeKB() {
-    return partSizeKB;
-  }
-
-  /**
-   * Get the buffer size in KB.
-   * @return the buffer size wanted for reads
-   */
-  public int getBufferSizeKB() {
-    return bufferSizeKB;
-  }
-
-  public int getProxyPort() {
-    return proxyPort;
-  }
-
-  public String getProxyHost() {
-    return proxyHost;
-  }
-
-  public int getRetryCount() {
-    return retryCount;
-  }
-
-  public int getConnectTimeout() {
-    return connectTimeout;
-  }
-
-  public boolean isUsePublicURL() {
-    return usePublicURL;
-  }
-
-  public int getThrottleDelay() {
-    return throttleDelay;
-  }
-
-  /**
-   * Get the current operation statistics.
-   * @return a snapshot of the statistics
-   */
-
-  public List<DurationStats> getOperationStatistics() {
-    return durationStats.getDurationStatistics();
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html
deleted file mode 100644
index ad900f90d06..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html
+++ /dev/null
@@ -1,81 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
-        "http://www.w3.org/TR/html4/loose.dtd">
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~  or more contributor license agreements.  See the NOTICE file
-  ~  distributed with this work for additional information
-  ~  regarding copyright ownership.  The ASF licenses this file
-  ~  to you under the Apache License, Version 2.0 (the
-  ~  "License"); you may not use this file except in compliance
-  ~  with the License.  You may obtain a copy of the License at
-  ~
-  ~       http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~  Unless required by applicable law or agreed to in writing, software
-  ~  distributed under the License is distributed on an "AS IS" BASIS,
-  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~  See the License for the specific language governing permissions and
-  ~  limitations under the License.
-  -->
-
-<html>
-<head>
-    <title>Swift Filesystem Client for Apache Hadoop</title>
-</head>
-<body>
-
-<h1>
-    Swift Filesystem Client for Apache Hadoop
-</h1>
-
-<h2>Introduction</h2>
-
-<div>This package provides support in Apache Hadoop for the OpenStack Swift
-    object store, allowing client applications -including MR jobs- to
-    read and write data in Swift.
-</div>
-
-<div>Design Goals</div>
-<ol>
-    <li>Give clients access to SwiftFS files, similar to S3n:</li>
-    <li>maybe: support a Swift Block store -- at least until Swift's
-        support for &gt;5GB files has stabilized.
-    </li>
-    <li>Support for data-locality if the Swift FS provides file location information</li>
-    <li>Support access to multiple Swift filesystems in the same client/task.</li>
-    <li>Authenticate using the Keystone APIs.</li>
-    <li>Avoid dependency on unmaintained libraries.</li>
-</ol>
-
-
-<h2>Supporting multiple Swift Filesystems</h2>
-
-The goal of supporting multiple swift filesystems simultaneously changes how
-clusters are named and authenticated. In Hadoop's S3 and S3N filesystems, the "bucket" into
-which objects are stored is directly named in the URL, such as
-<code>s3n://bucket/object1</code>. The Hadoop configuration contains a
-single set of login credentials for S3 (username and key), which are used to
-authenticate the HTTP operations.
-
-For swift, we need to know not only the "container" name, but also which credentials
-to use to authenticate with it -and which URL to use for authentication.
-
-This has led to a different design pattern from S3: instead of simple bucket names,
-the hostname of a swift container is two-level, the name of the service provider
-being the second part: <code>swift://container.service/</code>
-
-The <code>service</code> portion of this domain name is used as a reference into
-the client settings -and so identifies the service provider of that container.
-
-
-<h2>Testing</h2>
-
-<div>
-    The client code can be tested against public or private Swift instances; the
-    public services were (at the time of writing -January 2013-) Rackspace and
-    HP Cloud. Testing against both instances is how interoperability
-    can be verified.
-</div>
-
-</body>
-</html>
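The two-level naming described in the removed package.html (container name first, service-provider reference second in the hostname) can be sketched with plain `java.net.URI` parsing. The `SwiftUriNaming` class and its method names below are illustrative only, not part of the Hadoop codebase:

```java
import java.net.URI;

// Illustrative sketch (not Hadoop code) of the two-level hostname scheme:
// swift://<container>.<service>/path, where <service> is a key into the
// client-side configuration identifying the provider and its credentials.
public class SwiftUriNaming {

    // Everything before the first dot is the container name.
    public static String container(URI uri) {
        String host = uri.getHost();
        int dot = host.indexOf('.');
        return dot < 0 ? host : host.substring(0, dot);
    }

    // Everything after the first dot names the service provider.
    public static String service(URI uri) {
        String host = uri.getHost();
        int dot = host.indexOf('.');
        return dot < 0 ? null : host.substring(dot + 1);
    }

    public static void main(String[] args) {
        URI uri = URI.create("swift://data.rackspace/object1");
        System.out.println(container(uri)); // data
        System.out.println(service(uri));   // rackspace
    }
}
```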
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java
deleted file mode 100644
index 794219f31a4..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import org.apache.hadoop.fs.BufferedFSInputStream;
-import org.apache.hadoop.fs.FSExceptionMessages;
-import org.apache.hadoop.fs.FSInputStream;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
-
-import java.io.EOFException;
-import java.io.IOException;
-
-/**
- * Add stricter compliance with the evolving FS specifications
- */
-public class StrictBufferedFSInputStream extends BufferedFSInputStream {
-
-  public StrictBufferedFSInputStream(FSInputStream in,
-                                     int size) {
-    super(in, size);
-  }
-
-  @Override
-  public void seek(long pos) throws IOException {
-    if (pos < 0) {
-      throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK);
-    }
-    if (in == null) {
-      throw new SwiftConnectionClosedException(FSExceptionMessages.STREAM_IS_CLOSED);
-    }
-    super.seek(pos);
-  }
-}
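The stricter seek contract that the removed StrictBufferedFSInputStream enforced (reject negative offsets with an EOFException, reject seeks on a closed stream) can be restated as a standalone sketch; the `SimpleSeekable` class here is hypothetical, not from the Hadoop code:

```java
import java.io.EOFException;
import java.io.IOException;

// Hypothetical stand-in (not Hadoop code) demonstrating the seek contract:
// negative offsets raise EOFException, seeking a closed stream raises
// IOException, and valid seeks simply update the position.
public class SeekContract {

    static class SimpleSeekable {
        private boolean closed;
        private long pos;

        void close() { closed = true; }

        void seek(long newPos) throws IOException {
            if (newPos < 0) {
                throw new EOFException("Cannot seek to a negative offset");
            }
            if (closed) {
                throw new IOException("Stream is closed");
            }
            pos = newPos;
        }

        long getPos() { return pos; }
    }

    public static void main(String[] args) throws IOException {
        SimpleSeekable s = new SimpleSeekable();
        s.seek(10);
        System.out.println(s.getPos()); // 10
        try {
            s.seek(-1);
        } catch (EOFException e) {
            System.out.println("negative seek rejected");
        }
    }
}
```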
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java
deleted file mode 100644
index 725cae1e3b8..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-
-/**
- * A subclass of {@link FileStatus} that contains the
- * Swift-specific rules of when a file is considered to be a directory.
- */
-public class SwiftFileStatus extends FileStatus {
-
-  public SwiftFileStatus() {
-  }
-
-  public SwiftFileStatus(long length,
-                         boolean isdir,
-                         int block_replication,
-                         long blocksize, long modification_time, Path path) {
-    super(length, isdir, block_replication, blocksize, modification_time, path);
-  }
-
-  public SwiftFileStatus(long length,
-                         boolean isdir,
-                         int block_replication,
-                         long blocksize,
-                         long modification_time,
-                         long access_time,
-                         FsPermission permission,
-                         String owner, String group, Path path) {
-    super(length, isdir, block_replication, blocksize, modification_time,
-            access_time, permission, owner, group, path);
-  }
-
-  //HDFS2+ only
-
-  public SwiftFileStatus(long length,
-                         boolean isdir,
-                         int block_replication,
-                         long blocksize,
-                         long modification_time,
-                         long access_time,
-                         FsPermission permission,
-                         String owner, String group, Path symlink, Path path) {
-    super(length, isdir, block_replication, blocksize, modification_time,
-          access_time, permission, owner, group, symlink, path);
-  }
-
-  /**
-   * Declare that the path represents a directory, which in the
-   * SwiftNativeFileSystem means "is a directory or a 0 byte file"
-   *
-   * @return true if the status is considered to be a directory
-   */
-  @Override
-  public boolean isDirectory() {
-    return super.isDirectory() || getLen() == 0;
-  }
-
-  /**
-   * An entry is a file if it is not a directory.
-   * By implementing it <i>and not marking as an override</i> this
-   * subclass builds and runs in both Hadoop versions.
-   * @return the opposite value to {@link #isDirectory()}
-   */
-  @Override
-  public boolean isFile() {
-    return !this.isDirectory();
-  }
-
-  @Override
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(getClass().getSimpleName());
-    sb.append("{ ");
-    sb.append("path=").append(getPath());
-    sb.append("; isDirectory=").append(isDirectory());
-    sb.append("; length=").append(getLen());
-    sb.append("; blocksize=").append(getBlockSize());
-    sb.append("; modification_time=").append(getModificationTime());
-    sb.append("}");
-    return sb.toString();
-  }
-}
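The directory rule implemented by the removed SwiftFileStatus.isDirectory() (an entry counts as a directory if the store marked it as one, or if it is a zero-byte object) can be reduced to a standalone sketch; the class and method names here are hypothetical:

```java
// Illustrative restatement (not Hadoop code) of the SwiftFileStatus rule:
// Swift models directories as zero-byte objects, so any entry that is
// either marked as a directory or has zero length is treated as one.
public class SwiftDirRule {

    public static boolean isDirectory(boolean markedAsDirectory, long length) {
        return markedAsDirectory || length == 0;
    }

    // A file is simply anything that is not considered a directory.
    public static boolean isFile(boolean markedAsDirectory, long length) {
        return !isDirectory(markedAsDirectory, length);
    }

    public static void main(String[] args) {
        System.out.println(isDirectory(false, 0));   // true: zero-byte object
        System.out.println(isDirectory(false, 42));  // false: ordinary file
        System.out.println(isFile(true, 42));        // false: real directory
    }
}
```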
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
deleted file mode 100644
index 560eadd9309..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
+++ /dev/null
@@ -1,761 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import org.apache.hadoop.security.UserGroupInformation;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.BlockLocation;
-import org.apache.hadoop.fs.CreateFlag;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileAlreadyExistsException;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.ParentNotDirectoryException;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftUnsupportedFeatureException;
-import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants;
-import org.apache.hadoop.fs.swift.util.DurationStats;
-import org.apache.hadoop.fs.swift.util.SwiftObjectPath;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-import org.apache.hadoop.util.Progressable;
-
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.OutputStream;
-import java.net.URI;
-import java.util.ArrayList;
-import java.util.EnumSet;
-import java.util.List;
-
-/**
- * Swift file system implementation. Extends Hadoop FileSystem
- */
-public class SwiftNativeFileSystem extends FileSystem {
-
-  /** filesystem prefix: {@value} */
-  public static final String SWIFT = "swift";
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftNativeFileSystem.class);
-
-  /**
-   * path to user work directory for storing temporary files
-   */
-  private Path workingDir;
-
-  /**
-   * Swift URI
-   */
-  private URI uri;
-
-  /**
-   * reference to swiftFileSystemStore
-   */
-  private SwiftNativeFileSystemStore store;
-
-  /**
-   * Default constructor for Hadoop
-   */
-  public SwiftNativeFileSystem() {
-    // set client in initialize()
-  }
-
-  /**
-   * This constructor is used for testing purposes.
-   */
-  public SwiftNativeFileSystem(SwiftNativeFileSystemStore store) {
-    this.store = store;
-  }
-
-  /**
-   * This is for testing
-   * @return the inner store class
-   */
-  public SwiftNativeFileSystemStore getStore() {
-    return store;
-  }
-
-  @Override
-  public String getScheme() {
-    return SWIFT;
-  }
-
-  /**
-   * default class initialization.
-   *
-   * @param fsuri path to Swift
-   * @param conf  Hadoop configuration
-   * @throws IOException
-   */
-  @Override
-  public void initialize(URI fsuri, Configuration conf) throws IOException {
-    super.initialize(fsuri, conf);
-
-    setConf(conf);
-    if (store == null) {
-      store = new SwiftNativeFileSystemStore();
-    }
-    this.uri = fsuri;
-    String username;
-    try {
-      username = UserGroupInformation.getCurrentUser().getShortUserName();
-    } catch (IOException ex) {
-      LOG.warn("Unable to get user name. Fall back to system property " +
-          "user.name", ex);
-      username = System.getProperty("user.name");
-    }
-    this.workingDir = new Path("/user", username)
-      .makeQualified(uri, new Path(username));
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Initializing SwiftNativeFileSystem against URI " + uri
-              + " and working dir " + workingDir);
-    }
-    store.initialize(uri, conf);
-    LOG.debug("SwiftFileSystem initialized");
-  }
-
-  /**
-   * @return path to Swift
-   */
-  @Override
-  public URI getUri() {
-
-    return uri;
-  }
-
-  @Override
-  public String toString() {
-    return "Swift FileSystem " + store;
-  }
-
-  /**
-   * Path to user working directory
-   *
-   * @return Hadoop path
-   */
-  @Override
-  public Path getWorkingDirectory() {
-    return workingDir;
-  }
-
-  /**
-   * @param dir user working directory
-   */
-  @Override
-  public void setWorkingDirectory(Path dir) {
-    workingDir = makeAbsolute(dir);
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("SwiftFileSystem.setWorkingDirectory to " + dir);
-    }
-  }
-
-  /**
-   * Return a file status object that represents the path.
-   *
-   * @param path The path we want information from
-   * @return a FileStatus object
-   */
-  @Override
-  public FileStatus getFileStatus(Path path) throws IOException {
-    Path absolutePath = makeAbsolute(path);
-    return store.getObjectMetadata(absolutePath);
-  }
-
-  /**
-   * The blocksize of this filesystem is set by the property
-   * SwiftProtocolConstants.SWIFT_BLOCKSIZE; the default is the value of
-   * SwiftProtocolConstants.DEFAULT_SWIFT_BLOCKSIZE.
-   * @return the blocksize for this FS.
-   */
-  @Override
-  public long getDefaultBlockSize() {
-    return store.getBlocksize();
-  }
-
-  /**
-   * The blocksize for this filesystem.
-   * @see #getDefaultBlockSize()
-   * @param f path of file
-   * @return the blocksize for the path
-   */
-  @Override
-  public long getDefaultBlockSize(Path f) {
-    return store.getBlocksize();
-  }
-
-  @Override
-  public long getBlockSize(Path path) throws IOException {
-    return store.getBlocksize();
-  }
-
-  @Override
-  @SuppressWarnings("deprecation")
-  public boolean isFile(Path f) throws IOException {
-    try {
-      FileStatus fileStatus = getFileStatus(f);
-      return !SwiftUtils.isDirectory(fileStatus);
-    } catch (FileNotFoundException e) {
-      return false;               // f does not exist
-    }
-  }
-
-  @SuppressWarnings("deprecation")
-  @Override
-  public boolean isDirectory(Path f) throws IOException {
-
-    try {
-      FileStatus fileStatus = getFileStatus(f);
-      return SwiftUtils.isDirectory(fileStatus);
-    } catch (FileNotFoundException e) {
-      return false;               // f does not exist
-    }
-  }
-
-  /**
-   * Override getCanonicalServiceName because we don't support tokens in Swift.
-   */
-  @Override
-  public String getCanonicalServiceName() {
-    // Does not support Token
-    return null;
-  }
-
-  /**
-   * Return an array containing hostnames, offset and size of
-   * portions of the given file.  For a nonexistent
-   * file or regions, null will be returned.
-   * <p>
-   * This call is most helpful with DFS, where it returns
-   * hostnames of machines that contain the given file.
-   * <p>
-   * The FileSystem will simply return an element containing 'localhost'.
-   */
-  @Override
-  public BlockLocation[] getFileBlockLocations(FileStatus file,
-                                               long start,
-                                               long len) throws IOException {
-    //argument checks
-    if (file == null) {
-      return null;
-    }
-
-    if (start < 0 || len < 0) {
-      throw new IllegalArgumentException("Negative start or len parameter" +
-                                         " to getFileBlockLocations");
-    }
-    if (file.getLen() <= start) {
-      return new BlockLocation[0];
-    }
-
-    // Check if the requested file in Swift is larger than 5GB. In that case
-    // each block has its own location -which may be determinable
-    // from the Swift client API, depending on the remote server.
-    final FileStatus[] listOfFileBlocks = store.listSubPaths(file.getPath(),
-                                                             false,
-                                                             true);
-    List<URI> locations = new ArrayList<URI>();
-    if (listOfFileBlocks.length > 1) {
-      for (FileStatus fileStatus : listOfFileBlocks) {
-        if (SwiftObjectPath.fromPath(uri, fileStatus.getPath())
-                .equals(SwiftObjectPath.fromPath(uri, file.getPath()))) {
-          continue;
-        }
-        locations.addAll(store.getObjectLocation(fileStatus.getPath()));
-      }
-    } else {
-      locations = store.getObjectLocation(file.getPath());
-    }
-
-    if (locations.isEmpty()) {
-      LOG.debug("No locations returned for " + file.getPath());
-      //no locations were returned for the object
-      //fall back to the superclass
-
-      String[] name = {SwiftProtocolConstants.BLOCK_LOCATION};
-      String[] host = { "localhost" };
-      String[] topology={SwiftProtocolConstants.TOPOLOGY_PATH};
-      return new BlockLocation[] {
-        new BlockLocation(name, host, topology,0, file.getLen())
-      };
-    }
-
-    final String[] names = new String[locations.size()];
-    final String[] hosts = new String[locations.size()];
-    int i = 0;
-    for (URI location : locations) {
-      hosts[i] = location.getHost();
-      names[i] = location.getAuthority();
-      i++;
-    }
-    return new BlockLocation[]{
-            new BlockLocation(names, hosts, 0, file.getLen())
-    };
-  }
-
-  /**
-   * Create the parent directories.
-   * As an optimization, the entire hierarchy of parent
-   * directories is <i>Not</i> polled. Instead
-   * the tree is walked up from the last to the first,
-   * creating directories until one that exists is found.
-   *
-   * This strategy means if a file is created in an existing directory,
-   * one quick poll suffices.
-   *
-   * There is a big assumption here: that all parent directories of an existing
-   * directory also exist.
-   * @param path path to create.
-   * @param permission to apply to files
-   * @return true if the operation was successful
-   * @throws IOException on a problem
-   */
-  @Override
-  public boolean mkdirs(Path path, FsPermission permission) throws IOException {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("SwiftFileSystem.mkdirs: " + path);
-    }
-    Path directory = makeAbsolute(path);
-
-    //build a list of paths to create
-    List<Path> paths = new ArrayList<Path>();
-    while (shouldCreate(directory)) {
-      //this directory needs creation, add to the list
-      paths.add(0, directory);
-      //now see if the parent needs to be created
-      directory = directory.getParent();
-    }
-
-    //go through the list of directories to create
-    for (Path p : paths) {
-      if (isNotRoot(p)) {
-        //perform a mkdir operation without any polling of
-        //the far end first
-        forceMkdir(p);
-      }
-    }
-
-    //if an exception was not thrown, this operation is considered
-    //a success
-    return true;
-  }
-
-  private boolean isNotRoot(Path absolutePath) {
-    return !isRoot(absolutePath);
-  }
-
-  private boolean isRoot(Path absolutePath) {
-    return absolutePath.getParent() == null;
-  }
-
-  /**
-   * internal implementation of directory creation.
-   *
-   * @param path path to the directory
-   * @return true if the directory was created; false if no creation was needed
-   * @throws IOException if the specified path is a file instead of a directory
-   */
-  private boolean mkdir(Path path) throws IOException {
-    Path directory = makeAbsolute(path);
-    boolean shouldCreate = shouldCreate(directory);
-    if (shouldCreate) {
-      forceMkdir(directory);
-    }
-    return shouldCreate;
-  }
-
-  /**
-   * Should mkdir create this directory?
-   * If the directory is root : false
-   * If the entry exists and is a directory: false
-   * If the entry exists and is a file: exception
-   * else: true
-   * @param directory path to query
-   * @return true iff the directory should be created
-   * @throws IOException IO problems
-   * @throws ParentNotDirectoryException if the path references a file
-   */
-  private boolean shouldCreate(Path directory) throws IOException {
-    FileStatus fileStatus;
-    boolean shouldCreate;
-    if (isRoot(directory)) {
-      //its the base dir, bail out immediately
-      return false;
-    }
-    try {
-      //find out about the path
-      fileStatus = getFileStatus(directory);
-
-      if (!SwiftUtils.isDirectory(fileStatus)) {
-        //if it's a file, raise an error
-        throw new ParentNotDirectoryException(
-                String.format("%s: can't mkdir since it exists and is not a directory: %s",
-                    directory, fileStatus));
-      } else {
-        //path exists, and it is a directory
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("skipping mkdir(" + directory + ") as it exists already");
-        }
-        shouldCreate = false;
-      }
-    } catch (FileNotFoundException e) {
-      shouldCreate = true;
-    }
-    return shouldCreate;
-  }
-
-  /**
-   * mkdir of a directory -irrespective of what was there underneath.
-   * There are no checks for the directory already existing, or for
-   * the path referencing a file; those checks are assumed to have
-   * taken place already.
-   * @param absolutePath path to create
-   * @throws IOException IO problems
-   */
-  private void forceMkdir(Path absolutePath) throws IOException {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Making dir '" + absolutePath + "' in Swift");
-    }
-    //file is not found: it must be created
-    store.createDirectory(absolutePath);
-  }
-
-  /**
-   * List the statuses of the files/directories in the given path if the path is
-   * a directory.
-   *
-   * @param path given path
-   * @return the statuses of the files/directories in the given path
-   * @throws IOException
-   */
-  @Override
-  public FileStatus[] listStatus(Path path) throws IOException {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("SwiftFileSystem.listStatus for: " + path);
-    }
-    return store.listSubPaths(makeAbsolute(path), false, true);
-  }
-
-  /**
-   * This optional operation is not supported
-   */
-  @Override
-  public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
-      throws IOException {
-    LOG.debug("SwiftFileSystem.append");
-    throw new SwiftUnsupportedFeatureException("Not supported: append()");
-  }
-
-  /**
-   * @param permission Currently ignored.
-   */
-  @Override
-  public FSDataOutputStream create(Path file, FsPermission permission,
-                                   boolean overwrite, int bufferSize,
-                                   short replication, long blockSize,
-                                   Progressable progress)
-          throws IOException {
-    LOG.debug("SwiftFileSystem.create");
-
-    FileStatus fileStatus = null;
-    Path absolutePath = makeAbsolute(file);
-    try {
-      fileStatus = getFileStatus(absolutePath);
-    } catch (FileNotFoundException e) {
-      //the file isn't there.
-    }
-
-    if (fileStatus != null) {
-      //the path exists -action depends on whether or not it is a directory,
-      //and what the overwrite policy is.
-
-      //What is clear at this point is that if the entry exists, there's
-      //no need to bother creating any parent entries
-      if (fileStatus.isDirectory()) {
-        //here someone is trying to create a file over a directory
-
-/*    we can't throw an exception here as there is no easy way to distinguish
-     a file from the dir
-
-        throw new SwiftPathExistsException("Cannot create a file over a directory:"
-                                           + file);
- */
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Overwriting either an empty file or a directory");
-        }
-      }
-      if (overwrite) {
-        //overwrite set -> delete the object.
-        store.delete(absolutePath, true);
-      } else {
-        throw new FileAlreadyExistsException("Path exists: " + file);
-      }
-    } else {
-      // destination does not exist -trigger creation of the parent
-      Path parent = file.getParent();
-      if (parent != null) {
-        if (!mkdirs(parent)) {
-          throw new SwiftOperationFailedException(
-            "Mkdirs failed to create " + parent);
-        }
-      }
-    }
-
-    SwiftNativeOutputStream out = createSwiftOutputStream(file);
-    return new FSDataOutputStream(out, statistics);
-  }
-
-  /**
-   * Create the swift output stream
-   * @param path path to write to
-   * @return the new file
-   * @throws IOException
-   */
-  protected SwiftNativeOutputStream createSwiftOutputStream(Path path) throws
-                                                                       IOException {
-    long partSizeKB = getStore().getPartsizeKB();
-    return new SwiftNativeOutputStream(getConf(),
-            getStore(),
-            path.toUri().toString(),
-            partSizeKB);
-  }
-
-  /**
-   * Opens an FSDataInputStream at the indicated Path.
-   *
-   * @param path       the file name to open
-   * @param bufferSize the size of the buffer to be used.
-   * @return the input stream
-   * @throws FileNotFoundException if the file is not found
-   * @throws IOException any IO problem
-   */
-  @Override
-  public FSDataInputStream open(Path path, int bufferSize) throws IOException {
-    int bufferSizeKB = getStore().getBufferSizeKB();
-    long readBlockSize = bufferSizeKB * 1024L;
-    return open(path, bufferSize, readBlockSize);
-  }
-
-  /**
-   * Low-level operation to also set the block size for this operation
-   * @param path       the file name to open
-   * @param bufferSize the size of the buffer to be used.
-   * @param readBlockSize how big should the read block/buffer size be?
-   * @return the input stream
-   * @throws FileNotFoundException if the file is not found
-   * @throws IOException any IO problem
-   */
-  public FSDataInputStream open(Path path,
-                                int bufferSize,
-                                long readBlockSize) throws IOException {
-    if (readBlockSize <= 0) {
-      throw new SwiftConfigurationException("Bad remote buffer size");
-    }
-    Path absolutePath = makeAbsolute(path);
-    return new FSDataInputStream(
-            new StrictBufferedFSInputStream(
-                    new SwiftNativeInputStream(store,
-                                       statistics,
-                                       absolutePath,
-                                       readBlockSize),
-                    bufferSize));
-  }
-
-  /**
-   * Renames Path src to Path dst. On swift this uses copy-and-delete
-   * and <i>is not atomic</i>.
-   *
-   * @param src path
-   * @param dst path
-   * @return true if the rename succeeded, false otherwise
-   * @throws IOException on problems
-   */
-  @Override
-  public boolean rename(Path src, Path dst) throws IOException {
-
-    try {
-      store.rename(makeAbsolute(src), makeAbsolute(dst));
-      //success
-      return true;
-    } catch (SwiftOperationFailedException
-        | FileAlreadyExistsException
-        | FileNotFoundException
-        | ParentNotDirectoryException e) {
-      //downgrade to a failure
-      LOG.debug("rename({}, {}) failed",src, dst, e);
-      return false;
-    }
-  }
-
-
-  /**
-   * Delete a file or directory
-   *
-   * @param path      the path to delete.
-   * @param recursive if the path is a directory and set to
-   *                  true, the directory is deleted; otherwise an exception
-   *                  is thrown if the directory is not empty. In the
-   *                  case of a file, recursive can be set to either true or false.
-   * @return true if the object was deleted
-   * @throws IOException IO problems
-   */
-  @Override
-  public boolean delete(Path path, boolean recursive) throws IOException {
-    try {
-      return store.delete(path, recursive);
-    } catch (FileNotFoundException e) {
-      //base path was not found.
-      return false;
-    }
-  }
-
-  /**
-   * Delete a file.
-   * This method is abstract in Hadoop 1.x; in 2.x+ it is non-abstract
-   * and deprecated
-   */
-  @Override
-  public boolean delete(Path f) throws IOException {
-    return delete(f, true);
-  }
-
-  /**
-   * Makes path absolute
-   *
-   * @param path path to file
-   * @return absolute path
-   */
-  protected Path makeAbsolute(Path path) {
-    if (path.isAbsolute()) {
-      return path;
-    }
-    return new Path(workingDir, path);
-  }
-
-  /**
-   * Get the current operation statistics
-   * @return a snapshot of the statistics
-   */
-  public List<DurationStats> getOperationStatistics() {
-    return store.getOperationStatistics();
-  }
-
-  /**
-   * Low level method to do a deep listing of all entries, not stopping
-   * at the next directory entry. This is to let tests be confident that
-   * recursive deletes really are working.
-   * @param path path to recurse down
-   * @param newest ask for the newest data, potentially slower than not.
-   * @return a potentially empty array of file status
-   * @throws IOException any problem
-   */
-  @InterfaceAudience.Private
-  public FileStatus[] listRawFileStatus(Path path, boolean newest) throws IOException {
-    return store.listSubPaths(makeAbsolute(path), true, newest);
-  }
-
-  /**
-   * Get the number of partitions written by an output stream
-   * This is for testing
-   * @param outputStream output stream
-   * @return the #of partitions written by that stream
-   */
-  @InterfaceAudience.Private
-  public static int getPartitionsWritten(FSDataOutputStream outputStream) {
-    SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
-    return snos.getPartitionsWritten();
-  }
-
-  private static SwiftNativeOutputStream getSwiftNativeOutputStream(
-    FSDataOutputStream outputStream) {
-    OutputStream wrappedStream = outputStream.getWrappedStream();
-    return (SwiftNativeOutputStream) wrappedStream;
-  }
-
-  /**
-   * Get the size of partitions written by an output stream
-   * This is for testing
-   *
-   * @param outputStream output stream
-   * @return partition size in bytes
-   */
-  @InterfaceAudience.Private
-  public static long getPartitionSize(FSDataOutputStream outputStream) {
-    SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
-    return snos.getFilePartSize();
-  }
-
-  /**
-   * Get the number of bytes written to an output stream.
-   * This is for testing.
-   *
-   * @param outputStream output stream
-   * @return the number of bytes written
-   */
-  @InterfaceAudience.Private
-  public static long getBytesWritten(FSDataOutputStream outputStream) {
-    SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
-    return snos.getBytesWritten();
-  }
-
-  /**
-   * Get the number of bytes uploaded by an output stream
-   * to the swift cluster.
-   * This is for testing
-   *
-   * @param outputStream output stream
-   * @return number of bytes uploaded
-   */
-  @InterfaceAudience.Private
-  public static long getBytesUploaded(FSDataOutputStream outputStream) {
-    SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
-    return snos.getBytesUploaded();
-  }
-
-  /**
-   * {@inheritDoc}
-   * @throws FileNotFoundException if the parent directory is not present.
-   * @throws FileAlreadyExistsException if the parent path is not a directory.
-   */
-  @Override
-  public FSDataOutputStream createNonRecursive(Path path,
-      FsPermission permission,
-      EnumSet<CreateFlag> flags,
-      int bufferSize,
-      short replication,
-      long blockSize,
-      Progressable progress) throws IOException {
-    Path parent = path.getParent();
-    if (parent != null) {
-      // expect this to raise an exception if there is no parent
-      if (!getFileStatus(parent).isDirectory()) {
-        throw new FileAlreadyExistsException("Not a directory: " + parent);
-      }
-    }
-    return create(path, permission,
-        flags.contains(CreateFlag.OVERWRITE), bufferSize,
-        replication, blockSize, progress);
-  }
-
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
deleted file mode 100644
index 5e480090092..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
+++ /dev/null
@@ -1,986 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.swift.snative;
-
-import com.fasterxml.jackson.databind.type.CollectionType;
-
-import org.apache.http.Header;
-import org.apache.http.HttpStatus;
-import org.apache.http.message.BasicHeader;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileAlreadyExistsException;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.ParentNotDirectoryException;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException;
-import org.apache.hadoop.fs.swift.http.HttpBodyContent;
-import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants;
-import org.apache.hadoop.fs.swift.http.SwiftRestClient;
-import org.apache.hadoop.fs.swift.util.DurationStats;
-import org.apache.hadoop.fs.swift.util.JSONUtil;
-import org.apache.hadoop.fs.swift.util.SwiftObjectPath;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-
-import java.io.ByteArrayInputStream;
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.InterruptedIOException;
-import java.net.URI;
-import java.net.URISyntaxException;
-import java.nio.charset.Charset;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-/**
- * File system store implementation.
- * Makes REST requests, parses data from responses
- */
-public class SwiftNativeFileSystemStore {
-  private static final Pattern URI_PATTERN = Pattern.compile("\"\\S+?\"");
-  private static final String PATTERN = "EEE, d MMM yyyy HH:mm:ss zzz";
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftNativeFileSystemStore.class);
-  private URI uri;
-  private SwiftRestClient swiftRestClient;
-
-  /**
-   * Initialize the filesystem store -this creates the REST client binding.
-   *
-   * @param fsURI         URI of the filesystem, which is used to map to the filesystem-specific
-   *                      options in the configuration file
-   * @param configuration configuration
-   * @throws IOException on any failure.
-   */
-  public void initialize(URI fsURI, Configuration configuration) throws IOException {
-    this.uri = fsURI;
-    this.swiftRestClient = SwiftRestClient.getInstance(fsURI, configuration);
-  }
-
-  @Override
-  public String toString() {
-    return "SwiftNativeFileSystemStore with "
-            + swiftRestClient;
-  }
-
-  /**
-   * Get the default blocksize of this (bound) filesystem
-   * @return the blocksize returned for all FileStatus queries,
-   * which is used by the MapReduce splitter.
-   */
-  public long getBlocksize() {
-    return 1024L * swiftRestClient.getBlocksizeKB();
-  }
-
-  public long getPartsizeKB() {
-    return swiftRestClient.getPartSizeKB();
-  }
-
-  public int getBufferSizeKB() {
-    return swiftRestClient.getBufferSizeKB();
-  }
-
-  public int getThrottleDelay() {
-    return swiftRestClient.getThrottleDelay();
-  }
-  /**
-   * Upload a file/input stream of a specific length.
-   *
-   * @param path        destination path in the swift filesystem
-   * @param inputStream input data. This is closed afterwards, always
-   * @param length      length of the data
-   * @throws IOException on a problem
-   */
-  public void uploadFile(Path path, InputStream inputStream, long length)
-          throws IOException {
-      swiftRestClient.upload(toObjectPath(path), inputStream, length);
-  }
-
-  /**
-   * Upload part of a larger file.
-   *
-   * @param path        destination path
-   * @param partNumber  item number in the path
-   * @param inputStream input data
-   * @param length      length of the data
-   * @throws IOException on a problem
-   */
-  public void uploadFilePart(Path path, int partNumber,
-                             InputStream inputStream, long length)
-          throws IOException {
-
-    String stringPath = path.toUri().toString();
-    String partitionFilename = SwiftUtils.partitionFilenameFromNumber(
-      partNumber);
-    if (stringPath.endsWith("/")) {
-      stringPath = stringPath.concat(partitionFilename);
-    } else {
-      stringPath = stringPath.concat("/").concat(partitionFilename);
-    }
-
-    swiftRestClient.upload(
-      new SwiftObjectPath(toDirPath(path).getContainer(), stringPath),
-            inputStream,
-            length);
-  }
-
-  /**
-   * Tell the Swift server to expect a multi-part upload by submitting
-   * a 0-byte file with the X-Object-Manifest header
-   *
-   * @param path path of the final file
-   * @throws IOException IO problems
-   */
-  public void createManifestForPartUpload(Path path) throws IOException {
-    String pathString = toObjectPath(path).toString();
-    if (!pathString.endsWith("/")) {
-      pathString = pathString.concat("/");
-    }
-    if (pathString.startsWith("/")) {
-      pathString = pathString.substring(1);
-    }
-
-    swiftRestClient.upload(toObjectPath(path),
-        new ByteArrayInputStream(new byte[0]),
-        0,
-        new BasicHeader(SwiftProtocolConstants.X_OBJECT_MANIFEST, pathString));
-  }
-
-  /**
-   * Get the metadata of an object
-   *
-   * @param path path
-   * @return file metadata; never null -a missing file raises FileNotFoundException
-   * @throws IOException           on a problem
-   * @throws FileNotFoundException if there is nothing at the end
-   */
-  public SwiftFileStatus getObjectMetadata(Path path) throws IOException {
-    return getObjectMetadata(path, true);
-  }
-
-  /**
-   * Get the HTTP headers, in case you really need the low-level
-   * metadata
-   * @param path path to probe
-   * @param newest ask for the newest version of the data
-   * @return the header list
-   * @throws IOException IO problem
-   * @throws FileNotFoundException if there is nothing at the end
-   */
-  public Header[] getObjectHeaders(Path path, boolean newest)
-    throws IOException, FileNotFoundException {
-    SwiftObjectPath objectPath = toObjectPath(path);
-    return stat(objectPath, newest);
-  }
-
-  /**
-   * Get the metadata of an object
-   *
-   * @param path path
-   * @param newest flag to say "set the newest header", otherwise take any entry
-   * @return file metadata; never null -a missing file raises FileNotFoundException
-   * @throws IOException           on a problem
-   * @throws FileNotFoundException if there is nothing at the end
-   */
-  public SwiftFileStatus getObjectMetadata(Path path, boolean newest)
-    throws IOException, FileNotFoundException {
-
-    SwiftObjectPath objectPath = toObjectPath(path);
-    final Header[] headers = stat(objectPath, newest);
-    //no headers is treated as a missing file
-    if (headers.length == 0) {
-      throw new FileNotFoundException("Not Found " + path.toUri());
-    }
-
-    boolean isDir = false;
-    long length = 0;
-    long lastModified = 0;
-    for (Header header : headers) {
-      String headerName = header.getName();
-      if (headerName.equals(SwiftProtocolConstants.X_CONTAINER_OBJECT_COUNT) ||
-              headerName.equals(SwiftProtocolConstants.X_CONTAINER_BYTES_USED)) {
-        length = 0;
-        isDir = true;
-      }
-      if (SwiftProtocolConstants.HEADER_CONTENT_LENGTH.equals(headerName)) {
-        length = Long.parseLong(header.getValue());
-      }
-      if (SwiftProtocolConstants.HEADER_LAST_MODIFIED.equals(headerName)) {
-        final SimpleDateFormat simpleDateFormat = new SimpleDateFormat(PATTERN);
-        try {
-          lastModified = simpleDateFormat.parse(header.getValue()).getTime();
-        } catch (ParseException e) {
-          throw new SwiftException("Failed to parse " + header.toString(), e);
-        }
-      }
-    }
-    if (lastModified == 0) {
-      lastModified = System.currentTimeMillis();
-    }
-
-    Path correctSwiftPath = getCorrectSwiftPath(path);
-    return new SwiftFileStatus(length,
-                               isDir,
-                               1,
-                               getBlocksize(),
-                               lastModified,
-                               correctSwiftPath);
-  }
-
-  private Header[] stat(SwiftObjectPath objectPath, boolean newest) throws
-                                                                    IOException {
-    Header[] headers;
-    if (newest) {
-      headers = swiftRestClient.headRequest("getObjectMetadata-newest",
-                                            objectPath, SwiftRestClient.NEWEST);
-    } else {
-      headers = swiftRestClient.headRequest("getObjectMetadata",
-                                            objectPath);
-    }
-    return headers;
-  }
-
-  /**
-   * Get the object as an input stream
-   *
-   * @param path object path
-   * @return the input stream -this must be closed to terminate the connection
-   * @throws IOException           IO problems
-   * @throws FileNotFoundException path doesn't resolve to an object
-   */
-  public HttpBodyContent getObject(Path path) throws IOException {
-    return swiftRestClient.getData(toObjectPath(path),
-                                   SwiftRestClient.NEWEST);
-  }
-
-  /**
-   * Get the input stream starting from a specific point.
-   *
-   * @param path           path to object
-   * @param byteRangeStart starting point
-   * @param length         no. of bytes
-   * @return an input stream that must be closed
-   * @throws IOException IO problems
-   */
-  public HttpBodyContent getObject(Path path, long byteRangeStart, long length)
-          throws IOException {
-    return swiftRestClient.getData(
-      toObjectPath(path), byteRangeStart, length);
-  }
-
-  /**
-   * List a directory.
-   * This is O(n) for the number of objects in this path.
-   *
-   * @param path working path
-   * @param listDeep ask for all the data
-   * @param newest ask for the newest data
-   * @return Collection of file statuses
-   * @throws IOException IO problems
-   * @throws FileNotFoundException if the path does not exist
-   */
-  private List<FileStatus> listDirectory(SwiftObjectPath path,
-                                         boolean listDeep,
-                                         boolean newest) throws IOException {
-    final byte[] bytes;
-    final ArrayList<FileStatus> files = new ArrayList<FileStatus>();
-    final Path correctSwiftPath = getCorrectSwiftPath(path);
-    try {
-      bytes = swiftRestClient.listDeepObjectsInDirectory(path, listDeep);
-    } catch (FileNotFoundException e) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("File/Directory not found " + path);
-      }
-      if (SwiftUtils.isRootDir(path)) {
-        return Collections.emptyList();
-      } else {
-        throw e;
-      }
-    } catch (SwiftInvalidResponseException e) {
-      //bad HTTP error code
-      if (e.getStatusCode() == HttpStatus.SC_NO_CONTENT) {
-        //this can come back on a root list if the container is empty
-        if (SwiftUtils.isRootDir(path)) {
-          return Collections.emptyList();
-        } else {
-          //NO_CONTENT returned on something other than the root directory;
-          //see if it is there, and convert to empty list or not found
-          //depending on whether the entry exists.
-          FileStatus stat = getObjectMetadata(correctSwiftPath, newest);
-
-          if (stat.isDirectory()) {
-            //it's an empty directory. state that
-            return Collections.emptyList();
-          } else {
-            //it's a file -return that as the status
-            files.add(stat);
-            return files;
-          }
-        }
-      } else {
-        //a different status code: rethrow immediately
-        throw e;
-      }
-    }
-
-    final CollectionType collectionType = JSONUtil.getJsonMapper().getTypeFactory().
-            constructCollectionType(List.class, SwiftObjectFileStatus.class);
-
-    final List<SwiftObjectFileStatus> fileStatusList = JSONUtil.toObject(
-        new String(bytes, Charset.forName("UTF-8")), collectionType);
-
-    //this can happen if user lists file /data/files/file
-    //in this case swift will return empty array
-    if (fileStatusList.isEmpty()) {
-      SwiftFileStatus objectMetadata = getObjectMetadata(correctSwiftPath,
-                                                         newest);
-      if (objectMetadata.isFile()) {
-        files.add(objectMetadata);
-      }
-
-      return files;
-    }
-
-    for (SwiftObjectFileStatus status : fileStatusList) {
-      if (status.getName() != null) {
-          files.add(new SwiftFileStatus(status.getBytes(),
-                  status.getBytes() == 0,
-                  1,
-                  getBlocksize(),
-                  status.getLast_modified().getTime(),
-                  getCorrectSwiftPath(new Path(status.getName()))));
-      }
-    }
-
-    return files;
-  }
-
-  /**
-   * List all elements in this directory
-   *
-   * @param path     path to work with
-   * @param recursive do a recursive get
-   * @param newest ask for the newest data, or accept possibly out-of-date data
-   * @return the file statuses, or an empty array if there are no children
-   * @throws IOException           on IO problems
-   * @throws FileNotFoundException if the path is nonexistent
-   */
-  public FileStatus[] listSubPaths(Path path,
-                                   boolean recursive,
-                                   boolean newest) throws IOException {
-    final Collection<FileStatus> fileStatuses;
-    fileStatuses = listDirectory(toDirPath(path), recursive, newest);
-    return fileStatuses.toArray(new FileStatus[fileStatuses.size()]);
-  }
-
-  /**
-   * Create a directory
-   *
-   * @param path path
-   * @throws IOException
-   */
-  public void createDirectory(Path path) throws IOException {
-    innerCreateDirectory(toDirPath(path));
-  }
-
-  /**
-   * The inner directory creation operation. This only creates
-   * the dir at the given path, not any parent dirs.
-   * @param swiftObjectPath swift object path at which a 0-byte blob should be
-   * put
-   * @throws IOException IO problems
-   */
-  private void innerCreateDirectory(SwiftObjectPath swiftObjectPath)
-          throws IOException {
-
-    swiftRestClient.putRequest(swiftObjectPath);
-  }
-
-  private SwiftObjectPath toDirPath(Path path) throws
-          SwiftConfigurationException {
-    return SwiftObjectPath.fromPath(uri, path, false);
-  }
-
-  private SwiftObjectPath toObjectPath(Path path) throws
-          SwiftConfigurationException {
-    return SwiftObjectPath.fromPath(uri, path);
-  }
-
-  /**
-   * Try to find the specific server(s) on which the data lives
-   * @param path path to probe
-   * @return a possibly empty list of locations
-   * @throws IOException on problems determining the locations
-   */
-  public List<URI> getObjectLocation(Path path) throws IOException {
-    final byte[] objectLocation;
-    objectLocation = swiftRestClient.getObjectLocation(toObjectPath(path));
-    if (objectLocation == null || objectLocation.length == 0) {
-      //no object location, return an empty list
-      return new LinkedList<URI>();
-    }
-    return extractUris(new String(objectLocation, Charset.forName("UTF-8")), path);
-  }
-
-  /**
-   * Deletes an object from Swift
-   *
-   * @param path path to delete
-   * @return true if the path was deleted by this specific operation.
-   * @throws IOException on a failure
-   */
-  public boolean deleteObject(Path path) throws IOException {
-    SwiftObjectPath swiftObjectPath = toObjectPath(path);
-    if (!SwiftUtils.isRootDir(swiftObjectPath)) {
-      return swiftRestClient.delete(swiftObjectPath);
-    } else {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Not deleting root directory entry");
-      }
-      return true;
-    }
-  }
-
-  /**
-   * Deletes a directory from Swift. This is not recursive.
-   *
-   * @param path path to delete
-   * @return true if the path was deleted by this specific operation -or
-   *         the path was root and not acted on.
-   * @throws IOException on a failure
-   */
-  public boolean rmdir(Path path) throws IOException {
-    return deleteObject(path);
-  }
-
-  /**
-   * Does the object exist
-   *
-   * @param path object path
-   * @return true if the metadata of an object could be retrieved
-   * @throws IOException IO problems other than FileNotFound, which
-   *                     is downgraded to an object does not exist return code
-   */
-  public boolean objectExists(Path path) throws IOException {
-    return objectExists(toObjectPath(path));
-  }
-
-  /**
-   * Does the object exist
-   *
-   * @param path swift object path
-   * @return true if the metadata of an object could be retrieved
-   * @throws IOException IO problems other than FileNotFound, which
-   *                     is downgraded to an object does not exist return code
-   */
-  public boolean objectExists(SwiftObjectPath path) throws IOException {
-    try {
-      Header[] headers = swiftRestClient.headRequest("objectExists",
-                                                     path,
-                                                     SwiftRestClient.NEWEST);
-      //no headers is treated as a missing file
-      return headers.length != 0;
-    } catch (FileNotFoundException e) {
-      return false;
-    }
-  }
-
-  /**
-   * Rename through copy-and-delete. This is a consequence of the
-   * Swift filesystem using the path as the hash
-   * into the Distributed Hash Table, "the ring" of filenames.
-   * <p>
-   * Because of the nature of the operation, it is not atomic.
-   *
-   * @param src source file/dir
-   * @param dst destination
-   * @throws IOException                   IO failure
-   * @throws SwiftOperationFailedException if the rename failed
-   * @throws FileNotFoundException         if the source directory is missing, or
-   *                                       the parent directory of the destination
-   */
-  public void rename(Path src, Path dst)
-    throws FileNotFoundException, SwiftOperationFailedException, IOException {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("mv " + src + " " + dst);
-    }
-    boolean renamingOnToSelf = src.equals(dst);
-
-    SwiftObjectPath srcObject = toObjectPath(src);
-    SwiftObjectPath destObject = toObjectPath(dst);
-
-    if (SwiftUtils.isRootDir(srcObject)) {
-      throw new SwiftOperationFailedException("cannot rename root dir");
-    }
-
-    final SwiftFileStatus srcMetadata;
-    srcMetadata = getObjectMetadata(src);
-    SwiftFileStatus dstMetadata;
-    try {
-      dstMetadata = getObjectMetadata(dst);
-    } catch (FileNotFoundException e) {
-      //destination does not exist.
-      LOG.debug("Destination does not exist");
-      dstMetadata = null;
-    }
-
-    //check to see if the destination parent directory exists
-    Path srcParent = src.getParent();
-    Path dstParent = dst.getParent();
-    //skip the overhead of a HEAD call if the src and dest share the same
-    //parent dir (in which case the dest dir exists), or the destination
-    //directory is root, in which case it must also exist
-    if (dstParent != null && !dstParent.equals(srcParent)) {
-      SwiftFileStatus fileStatus;
-      try {
-        fileStatus = getObjectMetadata(dstParent);
-      } catch (FileNotFoundException e) {
-        //destination parent doesn't exist; bail out
-        LOG.debug("destination parent directory " + dstParent + " doesn't exist");
-        throw e;
-      }
-      if (!fileStatus.isDir()) {
-        throw new ParentNotDirectoryException(dstParent.toString());
-      }
-    }
-
-    boolean destExists = dstMetadata != null;
-    boolean destIsDir = destExists && SwiftUtils.isDirectory(dstMetadata);
-    //calculate the destination
-    SwiftObjectPath destPath;
-
-    //enum the child entries and everything underneath
-    List<FileStatus> childStats = listDirectory(srcObject, true, true);
-    boolean srcIsFile = !srcMetadata.isDirectory();
-    if (srcIsFile) {
-
-      //source is a simple file OR a partitioned file
-      // outcomes:
-      // #1 dest exists and is file: fail
-      // #2 dest exists and is dir: destination path becomes under dest dir
-      // #3 dest does not exist: use dest as name
-      if (destExists) {
-
-        if (destIsDir) {
-          //outcome #2 -move to subdir of dest
-          destPath = toObjectPath(new Path(dst, src.getName()));
-        } else {
-          //outcome #1 dest it's a file: fail if different
-          if (!renamingOnToSelf) {
-            throw new FileAlreadyExistsException(
-                    "cannot rename a file over one that already exists");
-          } else {
-            //is mv self self where self is a file. this becomes a no-op
-            LOG.debug("Renaming file onto self: no-op => success");
-            return;
-          }
-        }
-      } else {
-        //outcome #3 -new entry
-        destPath = toObjectPath(dst);
-      }
-      int childCount = childStats.size();
-      //here there is one of:
-      // - a single object ==> standard file
-      // ->
-      if (childCount == 0) {
-        copyThenDeleteObject(srcObject, destPath);
-      } else {
-        //do the copy
-        SwiftUtils.debug(LOG, "Source file appears to be partitioned." +
-                              " copying file and deleting children");
-
-        copyObject(srcObject, destPath);
-        for (FileStatus stat : childStats) {
-          SwiftUtils.debug(LOG, "Deleting partitioned file %s ", stat);
-          deleteObject(stat.getPath());
-        }
-
-        swiftRestClient.delete(srcObject);
-      }
-    } else {
-
-      //here the source exists and is a directory
-      // outcomes (given we know the parent dir exists if we get this far)
-      // #1 destination is a file: fail
-      // #2 destination is a directory: create a new dir under that one
-      // #3 destination doesn't exist: create a new dir with that name
-      // #2 and #3 are only allowed if the dest path is not == or under src
-
-
-      if (destExists && !destIsDir) {
-        // #1 destination is a file: fail
-        throw new FileAlreadyExistsException(
-                "the source is a directory, but not the destination");
-      }
-      Path targetPath;
-      if (destExists) {
-        // #2 destination is a directory: create a new dir under that one
-        targetPath = new Path(dst, src.getName());
-      } else {
-        // #3 destination doesn't exist: create a new dir with that name
-        targetPath = dst;
-      }
-      SwiftObjectPath targetObjectPath = toObjectPath(targetPath);
-      //final check for any recursive operations
-      if (srcObject.isEqualToOrParentOf(targetObjectPath)) {
-        //you can't rename a directory onto itself
-        throw new SwiftOperationFailedException(
-          "cannot move a directory under itself");
-      }
-
-
-      LOG.info("mv  " + srcObject + " " + targetPath);
-
-      logDirectory("Directory to copy ", srcObject, childStats);
-
-      // iterative copy of everything under the directory.
-      // by listing all children this can be done iteratively
-      // rather than recursively -everything in this list is either a file
-      // or a 0-byte-len file pretending to be a directory.
-      String srcURI = src.toUri().toString();
-      int prefixStripCount = srcURI.length() + 1;
-      for (FileStatus fileStatus : childStats) {
-        Path copySourcePath = fileStatus.getPath();
-        String copySourceURI = copySourcePath.toUri().toString();
-
-        String copyDestSubPath = copySourceURI.substring(prefixStripCount);
-
-        Path copyDestPath = new Path(targetPath, copyDestSubPath);
-        if (LOG.isTraceEnabled()) {
-          //trace to debug some low-level rename path problems; retained
-          //in case they ever come back.
-          LOG.trace("srcURI=" + srcURI
-                  + "; copySourceURI=" + copySourceURI
-                  + "; copyDestSubPath=" + copyDestSubPath
-                  + "; copyDestPath=" + copyDestPath);
-        }
-        SwiftObjectPath copyDestination = toObjectPath(copyDestPath);
-
-        try {
-          copyThenDeleteObject(toObjectPath(copySourcePath),
-                  copyDestination);
-        } catch (FileNotFoundException e) {
-          LOG.info("Skipping rename of " + copySourcePath);
-        }
-        //add a throttle delay
-        throttle();
-      }
-      //now rename self. If missing, create the dest directory and warn
-      if (!SwiftUtils.isRootDir(srcObject)) {
-        try {
-          copyThenDeleteObject(srcObject,
-                  targetObjectPath);
-        } catch (FileNotFoundException e) {
-          //create the destination directory
-          LOG.warn("Source directory deleted during rename", e);
-          innerCreateDirectory(destObject);
-        }
-      }
-    }
-  }
-
-  /**
-   * Debug action to dump directory statuses to the debug log
-   *
-   * @param message    explanation
-   * @param objectPath object path (can be null)
-   * @param statuses   listing output
-   */
-  private void logDirectory(String message, SwiftObjectPath objectPath,
-                            Iterable<FileStatus> statuses) {
-
-    if (LOG.isDebugEnabled()) {
-      LOG.debug(message + ": listing of " + objectPath);
-      for (FileStatus fileStatus : statuses) {
-        LOG.debug(fileStatus.getPath().toString());
-      }
-    }
-  }
-
-  public void copy(Path srcKey, Path dstKey) throws IOException {
-    SwiftObjectPath srcObject = toObjectPath(srcKey);
-    SwiftObjectPath destObject = toObjectPath(dstKey);
-    swiftRestClient.copyObject(srcObject, destObject);
-  }
-
-
-  /**
-   * Copy an object then, if the copy worked, delete it.
-   * If the copy failed, the source object is not deleted.
-   *
-   * @param srcObject  source object path
-   * @param destObject destination object path
-   * @throws IOException IO problems
-   *
-   */
-  private void copyThenDeleteObject(SwiftObjectPath srcObject,
-                                    SwiftObjectPath destObject) throws
-          IOException {
-
-
-    //do the copy
-    copyObject(srcObject, destObject);
-    //getting here means the copy worked
-    swiftRestClient.delete(srcObject);
-  }
-  /**
-   * Copy an object
-   * @param srcObject  source object path
-   * @param destObject destination object path
-   * @throws IOException IO problems
-   */
-  private void copyObject(SwiftObjectPath srcObject,
-                                    SwiftObjectPath destObject) throws
-          IOException {
-    if (srcObject.isEqualToOrParentOf(destObject)) {
-      throw new SwiftException(
-        "Can't copy " + srcObject + " onto " + destObject);
-    }
-    //do the copy
-    boolean copySucceeded = swiftRestClient.copyObject(srcObject, destObject);
-    if (!copySucceeded) {
-      throw new SwiftException("Copy of " + srcObject + " to "
-              + destObject + " failed");
-    }
-  }
-
-  /**
-   * Take a Hadoop path and return one which uses the URI prefix and authority
-   * of this FS. It doesn't make a relative path absolute
-   * @param path path in
-   * @return path with a URI bound to this FS
-   * @throws SwiftException URI cannot be created.
-   */
-  public Path getCorrectSwiftPath(Path path) throws
-          SwiftException {
-    try {
-      final URI fullUri = new URI(uri.getScheme(),
-              uri.getAuthority(),
-              path.toUri().getPath(),
-              null,
-              null);
-
-      return new Path(fullUri);
-    } catch (URISyntaxException e) {
-      throw new SwiftException("Specified path " + path + " is incorrect", e);
-    }
-  }
-
-  /**
-   * Builds a hadoop-Path from a swift path, inserting the URI authority
-   * of this FS instance
-   * @param path swift object path
-   * @return Hadoop path
-   * @throws SwiftException if the URI couldn't be created.
-   */
-  private Path getCorrectSwiftPath(SwiftObjectPath path) throws
-          SwiftException {
-    try {
-      final URI fullUri = new URI(uri.getScheme(),
-              uri.getAuthority(),
-              path.getObject(),
-              null,
-              null);
-
-      return new Path(fullUri);
-    } catch (URISyntaxException e) {
-      throw new SwiftException("Specified path " + path + " is incorrect", e);
-    }
-  }
-
-
-  /**
-   * Extracts URIs from JSON
-   * @param json json to parse
-   * @param path path (used in exceptions)
-   * @return URIs
-   * @throws SwiftOperationFailedException on any problem parsing the JSON
-   */
-  public static List<URI> extractUris(String json, Path path) throws
-                                                   SwiftOperationFailedException {
-    final Matcher matcher = URI_PATTERN.matcher(json);
-    final List<URI> result = new ArrayList<URI>();
-    while (matcher.find()) {
-      final String s = matcher.group();
-      final String uri = s.substring(1, s.length() - 1);
-      try {
-        URI createdUri = URI.create(uri);
-        result.add(createdUri);
-      } catch (IllegalArgumentException e) {
-        //failure to create the URI, which means this is bad JSON. Convert
-        //to an exception with useful text
-        throw new SwiftOperationFailedException(
-          String.format(
-            "could not convert \"%s\" into a URI." +
-            " source: %s " +
-            " first JSON: %s",
-            uri, path, json.substring(0, 256)));
-      }
-    }
-    return result;
-  }
-
-  /**
-   * Insert a throttled wait if the throttle delay &gt; 0
-   * @throws InterruptedIOException if interrupted during sleep
-   */
-  public void throttle() throws InterruptedIOException {
-    int throttleDelay = getThrottleDelay();
-    if (throttleDelay > 0) {
-      try {
-        Thread.sleep(throttleDelay);
-      } catch (InterruptedException e) {
-        //convert to an IOE
-        throw (InterruptedIOException) new InterruptedIOException(e.toString())
-          .initCause(e);
-      }
-    }
-  }
-
-  /**
-   * Get the current operation statistics
-   * @return a snapshot of the statistics
-   */
-  public List<DurationStats> getOperationStatistics() {
-    return swiftRestClient.getOperationStatistics();
-  }
-
-
-  /**
-   * Delete the entire tree. This is an internal one with slightly different
-   * behavior: if an entry is missing, a {@link FileNotFoundException} is
-   * raised. This lets the caller distinguish a file not found with
-   * other reasons for failure, so handles race conditions in recursive
-   * directory deletes better.
-   * <p>
-   * The problem being addressed is: caller A requests a recursive directory
-   * of directory /dir ; caller B requests a delete of a file /dir/file,
-   * between caller A enumerating the files contents, and requesting a delete
-   * of /dir/file. We want to recognise the special case
-   * "targeted file is no longer there" and not convert that into a failure.
-   *
-   * @param absolutePath  the path to delete.
-   * @param recursive if the path is a directory and recursive is set to
-   *                  true, the directory is deleted; otherwise an exception
-   *                  is thrown if the directory is not empty. For a file,
-   *                  recursive can be set to either true or false.
-   * @return true if the object was deleted
-   * @throws IOException           IO problems
-   * @throws FileNotFoundException if a file/dir being deleted is not there -
-   *                               this includes entries below the specified path, (if the path is a dir
-   *                               and recursive is true)
-   */
-  public boolean delete(Path absolutePath, boolean recursive) throws IOException {
-    Path swiftPath = getCorrectSwiftPath(absolutePath);
-    SwiftUtils.debug(LOG, "Deleting path '%s' recursive=%b",
-                     absolutePath,
-                     recursive);
-    boolean askForNewest = true;
-    SwiftFileStatus fileStatus = getObjectMetadata(swiftPath, askForNewest);
-
-    //ask for the file/dir status, but don't demand the newest, as we
-    //don't mind if the directory has changed
-    //list all entries under this directory.
-    //this will throw FileNotFoundException if the file isn't there
-    FileStatus[] statuses = listSubPaths(absolutePath, true, askForNewest);
-    if (statuses == null) {
-      //the directory went away during the non-atomic stages of the operation.
-      // Return false as it was not this thread doing the deletion.
-      SwiftUtils.debug(LOG, "Path '%s' has no status - it has 'gone away'",
-                       absolutePath);
-      return false;
-    }
-    int filecount = statuses.length;
-    SwiftUtils.debug(LOG, "Path '%s': %d status entries",
-                     absolutePath,
-                     filecount);
-
-    if (filecount == 0) {
-      //it's an empty directory or a path
-      rmdir(absolutePath);
-      return true;
-    }
-
-    if (LOG.isDebugEnabled()) {
-      SwiftUtils.debug(LOG, "%s", SwiftUtils.fileStatsToString(statuses, "\n"));
-    }
-
-    if (filecount == 1 && swiftPath.equals(statuses[0].getPath())) {
-      // 1 entry => simple file and it is the target
-      //simple file: delete it
-      SwiftUtils.debug(LOG, "Deleting simple file %s", absolutePath);
-      deleteObject(absolutePath);
-      return true;
-    }
-
-    //>1 entry implies directory with children. Run through them,
-    // but first check for the recursive flag and reject it *unless it looks
-    // like a partitioned file (len > 0 && has children)
-    if (!fileStatus.isDirectory()) {
-      LOG.debug("Multiple child entries but entry has data: assume partitioned");
-    } else if (!recursive) {
-      //if there are children, unless this is a recursive operation, fail immediately
-      throw new SwiftOperationFailedException("Directory " + fileStatus
-                                              + " is not empty: "
-                                              + SwiftUtils.fileStatsToString(
-                                                        statuses, "; "));
-    }
-
-    //delete the entries. including ourselves.
-    for (FileStatus entryStatus : statuses) {
-      Path entryPath = entryStatus.getPath();
-      try {
-        boolean deleted = deleteObject(entryPath);
-        if (!deleted) {
-          SwiftUtils.debug(LOG, "Failed to delete entry '%s'; continuing",
-                           entryPath);
-        }
-      } catch (FileNotFoundException e) {
-        //the path went away -race conditions.
-        //do not fail, as the outcome is still OK.
-        SwiftUtils.debug(LOG, "Path '%s' is no longer present; continuing",
-                         entryPath);
-      }
-      throttle();
-    }
-    //now delete self
-    SwiftUtils.debug(LOG, "Deleting base entry %s", absolutePath);
-    deleteObject(absolutePath);
-
-    return true;
-  }
-}
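The throttle() method removed above shows a small reusable pattern: sleeping between bulk requests, and converting a thread interrupt during that sleep into an InterruptedIOException so it can propagate through IOException-only signatures while keeping the interrupt status set. A minimal standalone sketch (ThrottleSketch and its field are illustrative names, not part of the Hadoop API):

```java
import java.io.InterruptedIOException;

// Sketch of the removed throttle() idiom: a configurable delay between
// bulk operations, with a thread interrupt converted into an
// InterruptedIOException so it fits IOException-only method signatures.
// Class and field names are illustrative, not the original Hadoop API.
class ThrottleSketch {
  private final int throttleDelayMillis;

  ThrottleSketch(int throttleDelayMillis) {
    this.throttleDelayMillis = throttleDelayMillis;
  }

  void throttle() throws InterruptedIOException {
    if (throttleDelayMillis > 0) {
      try {
        Thread.sleep(throttleDelayMillis);
      } catch (InterruptedException e) {
        // re-assert the interrupt flag, then surface the failure as an IOE
        Thread.currentThread().interrupt();
        throw (InterruptedIOException)
            new InterruptedIOException(e.toString()).initCause(e);
      }
    }
  }
}
```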
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java
deleted file mode 100644
index bce7325c980..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java
+++ /dev/null
@@ -1,385 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.fs.FSExceptionMessages;
-import org.apache.hadoop.fs.FSInputStream;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftException;
-import org.apache.hadoop.fs.swift.http.HttpBodyContent;
-import org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-
-import java.io.EOFException;
-import java.io.IOException;
-
-/**
- * The input stream from remote Swift blobs.
- * The class attempts to be buffer aware, and react to a forward seek operation
- * by trying to scan ahead through the current block of data to find it.
- * This accelerates some operations that do a lot of seek()/read() actions,
- * including work (such as in the MR engine) that does a seek() immediately after
- * an open().
- */
-class SwiftNativeInputStream extends FSInputStream {
-
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftNativeInputStream.class);
-
-  /**
-   *  range requested off the server: {@value}
-   */
-  private final long bufferSize;
-
-  /**
-   * File nativeStore instance
-   */
-  private final SwiftNativeFileSystemStore nativeStore;
-
-  /**
-   * Hadoop statistics. Used to get info about number of reads, writes, etc.
-   */
-  private final FileSystem.Statistics statistics;
-
-  /**
-   * Data input stream
-   */
-  private HttpInputStreamWithRelease httpStream;
-
-  /**
-   * File path
-   */
-  private final Path path;
-
-  /**
-   * Current position
-   */
-  private long pos = 0;
-
-  /**
-   * Length of the file picked up at start time
-   */
-  private long contentLength = -1;
-
-  /**
-   * Why the stream is closed
-   */
-  private String reasonClosed = "unopened";
-
-  /**
-   * Offset in the range requested last
-   */
-  private long rangeOffset = 0;
-
-  public SwiftNativeInputStream(SwiftNativeFileSystemStore storeNative,
-      FileSystem.Statistics statistics, Path path, long bufferSize)
-          throws IOException {
-    this.nativeStore = storeNative;
-    this.statistics = statistics;
-    this.path = path;
-    if (bufferSize <= 0) {
-      throw new IllegalArgumentException("Invalid buffer size");
-    }
-    this.bufferSize = bufferSize;
-    //initial buffer fill
-    this.httpStream = storeNative.getObject(path).getInputStream();
-    //fillBuffer(0);
-  }
-
-  /**
-   * Move to a new position within the file relative to where the pointer is now.
-   * Always call from a synchronized clause
-   * @param offset offset
-   */
-  private synchronized void incPos(int offset) {
-    pos += offset;
-    rangeOffset += offset;
-    SwiftUtils.trace(LOG, "Inc: pos=%d bufferOffset=%d", pos, rangeOffset);
-  }
-
-  /**
-   * Update the start of the buffer; always call from a sync'd clause
-   * @param seekPos position sought.
-   * @param contentLength content length provided by response (may be -1)
-   */
-  private synchronized void updateStartOfBufferPosition(long seekPos,
-                                                        long contentLength) {
-    //reset the seek pointer
-    pos = seekPos;
-    //and put the buffer offset to 0
-    rangeOffset = 0;
-    this.contentLength = contentLength;
-    SwiftUtils.trace(LOG, "Move: pos=%d; bufferOffset=%d; contentLength=%d",
-                     pos,
-                     rangeOffset,
-                     contentLength);
-  }
-
-  @Override
-  public synchronized int read() throws IOException {
-    verifyOpen();
-    int result = -1;
-    try {
-      result = httpStream.read();
-    } catch (IOException e) {
-      String msg = "IOException while reading " + path
-                   + ": " +e + ", attempting to reopen.";
-      LOG.debug(msg, e);
-      if (reopenBuffer()) {
-        result = httpStream.read();
-      }
-    }
-    if (result != -1) {
-      incPos(1);
-    }
-    if (statistics != null && result != -1) {
-      statistics.incrementBytesRead(1);
-    }
-    return result;
-  }
-
-  @Override
-  public synchronized int read(byte[] b, int off, int len) throws IOException {
-    SwiftUtils.debug(LOG, "read(buffer, %d, %d)", off, len);
-    SwiftUtils.validateReadArgs(b, off, len);
-    if (len == 0) {
-      return 0;
-    }
-    int result = -1;
-    try {
-      verifyOpen();
-      result = httpStream.read(b, off, len);
-    } catch (IOException e) {
-      //other IO problems are viewed as transient and re-attempted
-      LOG.info("Received IOException while reading '" + path +
-               "', attempting to reopen: " + e);
-      LOG.debug("IOE on read()" + e, e);
-      if (reopenBuffer()) {
-        result = httpStream.read(b, off, len);
-      }
-    }
-    if (result > 0) {
-      incPos(result);
-      if (statistics != null) {
-        statistics.incrementBytesRead(result);
-      }
-    }
-
-    return result;
-  }
-
-  /**
-   * Re-open the buffer
-   * @return true iff more data could be added to the buffer
-   * @throws IOException if not
-   */
-  private boolean reopenBuffer() throws IOException {
-    innerClose("reopening buffer to trigger refresh");
-    boolean success = false;
-    try {
-      fillBuffer(pos);
-      success =  true;
-    } catch (EOFException eof) {
-      //the EOF has been reached
-      this.reasonClosed = "End of file";
-    }
-    return success;
-  }
-
-  /**
-   * close the stream. After this the stream is not usable -unless and until
-   * it is re-opened (which can happen on some of the buffer ops)
-   * This method is thread-safe and idempotent.
-   *
-   * @throws IOException on IO problems.
-   */
-  @Override
-  public synchronized void close() throws IOException {
-    innerClose("closed");
-  }
-
-  private void innerClose(String reason) throws IOException {
-    try {
-      if (httpStream != null) {
-        reasonClosed = reason;
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Closing HTTP input stream : " + reason);
-        }
-        httpStream.close();
-      }
-    } finally {
-      httpStream = null;
-    }
-  }
-
-  /**
-   * Assume that the connection is not closed: throws an exception if it is
-   * @throws SwiftConnectionClosedException
-   */
-  private void verifyOpen() throws SwiftConnectionClosedException {
-    if (httpStream == null) {
-      throw new SwiftConnectionClosedException(reasonClosed);
-    }
-  }
-
-  @Override
-  public synchronized String toString() {
-    return "SwiftNativeInputStream" +
-           " position=" + pos
-           + " buffer size = " + bufferSize
-           + " "
-           + (httpStream != null ? httpStream.toString()
-                                 : (" no input stream: " + reasonClosed));
-  }
-
-  /**
-   * Treats any finalize() call without the input stream being closed
-   * as a serious problem, logging at error level
-   * @throws Throwable n/a
-   */
-  @Override
-  protected void finalize() throws Throwable {
-    if (httpStream != null) {
-      LOG.error(
-        "Input stream is leaking handles by not being closed() properly: "
-        + httpStream.toString());
-    }
-  }
-
-  /**
-   * Read through the specified number of bytes.
-   * The implementation iterates a byte at a time, which may seem inefficient
-   * compared to the read(byte[]) method offered by input streams.
-   * However, if you look at the code that implements that method, it comes
-   * down to read() one byte at a time - only here the return value is discarded.
-   * <p>
-   * This is a no-op if the stream is closed.
-   * @param bytes number of bytes to read.
-   * @throws IOException IO problems
-   * @throws SwiftException if a read returned -1.
-   */
-  private int chompBytes(long bytes) throws IOException {
-    int count = 0;
-    if (httpStream != null) {
-      int result;
-      for (long i = 0; i < bytes; i++) {
-        result = httpStream.read();
-        if (result < 0) {
-          throw new SwiftException("Received error code while chomping input");
-        }
-        count ++;
-        incPos(1);
-      }
-    }
-    return count;
-  }
-
-  /**
-   * Seek to an offset. If the data is already in the buffer, move to it
-   * @param targetPos target position
-   * @throws IOException on any problem
-   */
-  @Override
-  public synchronized void seek(long targetPos) throws IOException {
-    if (targetPos < 0) {
-      throw new EOFException(
-          FSExceptionMessages.NEGATIVE_SEEK);
-    }
-    //there's some special handling of near-local data
-    //as the seek can be omitted if it is in/adjacent
-    long offset = targetPos - pos;
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Seek to " + targetPos + "; current pos =" + pos
-                + "; offset="+offset);
-    }
-    if (offset == 0) {
-      LOG.debug("seek is no-op");
-      return;
-    }
-
-    if (offset < 0) {
-      LOG.debug("seek is backwards");
-    } else if ((rangeOffset + offset < bufferSize)) {
-      //if the seek is in  range of that requested, scan forwards
-      //instead of closing and re-opening a new HTTP connection
-      SwiftUtils.debug(LOG,
-                       "seek is within current stream"
-                       + "; pos= %d ; targetPos=%d; "
-                       + "offset= %d ; bufferOffset=%d",
-                       pos, targetPos, offset, rangeOffset);
-      try {
-        LOG.debug("chomping ");
-        chompBytes(offset);
-      } catch (IOException e) {
-        //this is assumed to be recoverable with a seek -or more likely to fail
-        LOG.debug("while chomping ",e);
-      }
-      if (targetPos - pos == 0) {
-        LOG.trace("chomping successful");
-        return;
-      }
-      LOG.trace("chomping failed");
-    } else {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Seek is beyond buffer size of " + bufferSize);
-      }
-    }
-
-    innerClose("seeking to " + targetPos);
-    fillBuffer(targetPos);
-  }
-
-  /**
-   * Fill the buffer from the target position
-   * If the target position == current position, the
-   * read still goes ahead; this is a way of handling partial read failures
-   * @param targetPos target position
-   * @throws IOException IO problems on the read
-   */
-  private void fillBuffer(long targetPos) throws IOException {
-    long length = targetPos + bufferSize;
-    SwiftUtils.debug(LOG, "Fetching %d bytes starting at %d", length, targetPos);
-    HttpBodyContent blob = nativeStore.getObject(path, targetPos, length);
-    httpStream = blob.getInputStream();
-    updateStartOfBufferPosition(targetPos, blob.getContentLength());
-  }
-
-  @Override
-  public synchronized long getPos() throws IOException {
-    return pos;
-  }
-
-  /**
-   * This FS doesn't explicitly support multiple data sources, so
-   * return false here.
-   * @param targetPos the desired target position
-   * @return true if a new source of the data has been set up
-   * as the source of future reads
-   * @throws IOException IO problems
-   */
-  @Override
-  public boolean seekToNewSource(long targetPos) throws IOException {
-    return false;
-  }
-}
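The seek() path of the removed SwiftNativeInputStream avoids closing and reopening the HTTP range request for short forward seeks by reading and discarding ("chomping") the intervening bytes. That technique can be sketched against any InputStream; ForwardSeekStream below is an illustrative name, and the reopen path for backward or long seeks is deliberately not modelled:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the forward-seek-by-draining optimisation: a seek within
// bufferSize bytes ahead of the current position is served by reading
// and discarding bytes from the open stream rather than reconnecting.
class ForwardSeekStream {
  private final InputStream in;
  private final long bufferSize;   // max distance worth skipping in-stream
  private long pos;

  ForwardSeekStream(InputStream in, long bufferSize) {
    this.in = in;
    this.bufferSize = bufferSize;
  }

  long getPos() { return pos; }

  int read() throws IOException {
    int b = in.read();
    if (b >= 0) { pos++; }
    return b;
  }

  // Seek forward by discarding bytes when the target is close enough;
  // a real client would reopen the range request for other seeks.
  void seek(long targetPos) throws IOException {
    long offset = targetPos - pos;
    if (offset < 0 || offset >= bufferSize) {
      throw new IOException("reopen required (not modelled in this sketch)");
    }
    while (pos < targetPos) {
      if (in.read() < 0) {
        throw new EOFException("seek past end of stream");
      }
      pos++;
    }
  }
}
```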
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java
deleted file mode 100644
index ac49a8a6495..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java
+++ /dev/null
@@ -1,389 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftException;
-import org.apache.hadoop.fs.swift.exceptions.SwiftInternalStateException;
-import org.apache.hadoop.fs.swift.util.SwiftUtils;
-
-import java.io.BufferedOutputStream;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.OutputStream;
-
-/**
- * Output stream, buffers data on local disk.
- * Writes to Swift on the close() method, unless the
- * file is significantly large that it is being written as partitions.
- * In this case, the first partition is written on the first write that puts
- * data over the partition, as may later writes. The close() then causes
- * the final partition to be written, along with a partition manifest.
- */
-class SwiftNativeOutputStream extends OutputStream {
-  public static final int ATTEMPT_LIMIT = 3;
-  private long filePartSize;
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftNativeOutputStream.class);
-  private Configuration conf;
-  private String key;
-  private File backupFile;
-  private OutputStream backupStream;
-  private SwiftNativeFileSystemStore nativeStore;
-  private boolean closed;
-  private int partNumber;
-  private long blockOffset;
-  private long bytesWritten;
-  private long bytesUploaded;
-  private boolean partUpload = false;
-  final byte[] oneByte = new byte[1];
-
-  /**
-   * Create an output stream
-   * @param conf configuration to use
-   * @param nativeStore native store to write through
-   * @param key the key to write
-   * @param partSizeKB the partition size
-   * @throws IOException
-   */
-  @SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
-  public SwiftNativeOutputStream(Configuration conf,
-                                 SwiftNativeFileSystemStore nativeStore,
-                                 String key,
-                                 long partSizeKB) throws IOException {
-    this.conf = conf;
-    this.key = key;
-    this.backupFile = newBackupFile();
-    this.nativeStore = nativeStore;
-    this.backupStream = new BufferedOutputStream(new FileOutputStream(backupFile));
-    this.partNumber = 1;
-    this.blockOffset = 0;
-    this.filePartSize = 1024L * partSizeKB;
-  }
-
-  private File newBackupFile() throws IOException {
-    File dir = new File(conf.get("hadoop.tmp.dir"));
-    if (!dir.mkdirs() && !dir.exists()) {
-      throw new SwiftException("Cannot create Swift buffer directory: " + dir);
-    }
-    File result = File.createTempFile("output-", ".tmp", dir);
-    result.deleteOnExit();
-    return result;
-  }
-
-  /**
-   * Flush the local backing stream.
-   * This does not trigger a flush of data to the remote blobstore.
-   * @throws IOException
-   */
-  @Override
-  public void flush() throws IOException {
-    backupStream.flush();
-  }
-
-  /**
-   * check that the output stream is open
-   *
-   * @throws SwiftException if it is not
-   */
-  private synchronized void verifyOpen() throws SwiftException {
-    if (closed) {
-      throw new SwiftConnectionClosedException();
-    }
-  }
-
-  /**
-   * Close the stream. This will trigger the upload of all locally cached
-   * data to the remote blobstore.
-   * @throws IOException IO problems uploading the data.
-   */
-  @Override
-  public synchronized void close() throws IOException {
-    if (closed) {
-      return;
-    }
-
-    try {
-      closed = true;
-      //formally declare as closed.
-      backupStream.close();
-      backupStream = null;
-      Path keypath = new Path(key);
-      if (partUpload) {
-        partUpload(true);
-        nativeStore.createManifestForPartUpload(keypath);
-      } else {
-        uploadOnClose(keypath);
-      }
-    } finally {
-      delete(backupFile);
-      backupFile = null;
-    }
-    assert backupStream == null: "backup stream has been reopened";
-  }
-
-  /**
-   * Upload a file when closed, either in one go, or, if the file is
-   * already partitioned, by uploading the remaining partition and a manifest.
-   * @param keypath key as a path
-   * @throws IOException IO Problems
-   */
-  private void uploadOnClose(Path keypath) throws IOException {
-    boolean uploadSuccess = false;
-    int attempt = 0;
-    while (!uploadSuccess) {
-      try {
-        ++attempt;
-        bytesUploaded += uploadFileAttempt(keypath, attempt);
-        uploadSuccess = true;
-      } catch (IOException e) {
-        LOG.info("Upload failed " + e, e);
-        if (attempt > ATTEMPT_LIMIT) {
-          throw e;
-        }
-      }
-    }
-}
-
-  @SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
-  private long uploadFileAttempt(Path keypath, int attempt) throws IOException {
-    long uploadLen = backupFile.length();
-    SwiftUtils.debug(LOG, "Closing write of file %s;" +
-                          " localfile=%s of length %d - attempt %d",
-                     key,
-                     backupFile,
-                     uploadLen,
-                     attempt);
-
-    nativeStore.uploadFile(keypath,
-                           new FileInputStream(backupFile),
-                           uploadLen);
-    return uploadLen;
-  }
-
-  @Override
-  protected void finalize() throws Throwable {
-    if(!closed) {
-      LOG.warn("stream not closed");
-    }
-    if (backupFile != null) {
-      LOG.warn("Leaking backing file " + backupFile);
-    }
-  }
-
-  private void delete(File file) {
-    if (file != null) {
-      SwiftUtils.debug(LOG, "deleting %s", file);
-      if (!file.delete()) {
-        LOG.warn("Could not delete " + file);
-      }
-    }
-  }
-
-  @Override
-  public void write(int b) throws IOException {
-    //insert to a one byte array
-    oneByte[0] = (byte) b;
-    //then delegate to the array writing routine
-    write(oneByte, 0, 1);
-  }
-
-  @Override
-  public synchronized void write(byte[] buffer, int offset, int len) throws
-                                                                     IOException {
-    //validate args
-    if (offset < 0 || len < 0 || (offset + len) > buffer.length) {
-      throw new IndexOutOfBoundsException("Invalid offset/length for write");
-    }
-    //validate the output stream
-    verifyOpen();
-    SwiftUtils.debug(LOG, " write(offset=%d, len=%d)", offset, len);
-
-    // if the size of file is greater than the partition limit
-    while (blockOffset + len >= filePartSize) {
-      // - then partition the blob and upload as many partitions
-      // are needed.
-      //how many bytes to write for this partition.
-      int subWriteLen = (int) (filePartSize - blockOffset);
-      if (subWriteLen < 0 || subWriteLen > len) {
-        throw new SwiftInternalStateException("Invalid subwrite len: "
-                                              + subWriteLen
-                                              + " -buffer len: " + len);
-      }
-      writeToBackupStream(buffer, offset, subWriteLen);
-      //move the offset along and length down
-      offset += subWriteLen;
-      len -= subWriteLen;
-      //now upload the partition that has just been filled up
-      // (this also sets blockOffset=0)
-      partUpload(false);
-    }
-    //any remaining data is now written
-    writeToBackupStream(buffer, offset, len);
-  }
-
-  /**
-   * Write to the backup stream.
-   * Guarantees:
-   * <ol>
-   *   <li>backupStream is open</li>
-   *   <li>blockOffset + len &lt; filePartSize</li>
-   * </ol>
-   * @param buffer buffer to write
-   * @param offset offset in buffer
-   * @param len length of write.
-   * @throws IOException backup stream write failing
-   */
-  private void writeToBackupStream(byte[] buffer, int offset, int len) throws
-                                                                       IOException {
-    assert len >= 0  : "remainder to write is negative";
-    SwiftUtils.debug(LOG," writeToBackupStream(offset=%d, len=%d)", offset, len);
-    if (len == 0) {
-      //no remainder -downgrade to no-op
-      return;
-    }
-
-    //write the new data out to the backup stream
-    backupStream.write(buffer, offset, len);
-    //increment the counters
-    blockOffset += len;
-    bytesWritten += len;
-  }
-
-  /**
-   * Upload a single partition. This deletes the local backing-file,
-   * and re-opens it to create a new one.
-   * @param closingUpload is this the final upload of an upload
-   * @throws IOException on IO problems
-   */
-  @SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
-  private void partUpload(boolean closingUpload) throws IOException {
-    if (backupStream != null) {
-      backupStream.close();
-    }
-
-    if (closingUpload && partUpload && backupFile.length() == 0) {
-      //skipping the upload if
-      // - it is close time
-      // - the final partition is 0 bytes long
-      // - one part has already been written
-      SwiftUtils.debug(LOG, "skipping upload of 0 byte final partition");
-      delete(backupFile);
-    } else {
-      partUpload = true;
-      boolean uploadSuccess = false;
-      int attempt = 0;
-      while(!uploadSuccess) {
-        try {
-          ++attempt;
-          bytesUploaded += uploadFilePartAttempt(attempt);
-          uploadSuccess = true;
-        } catch (IOException e) {
-          LOG.info("Upload failed " + e, e);
-          if (attempt > ATTEMPT_LIMIT) {
-            throw e;
-          }
-        }
-      }
-      delete(backupFile);
-      partNumber++;
-      blockOffset = 0;
-      if (!closingUpload) {
-        //if not the final upload, create a new output stream
-        backupFile = newBackupFile();
-        backupStream =
-          new BufferedOutputStream(new FileOutputStream(backupFile));
-      }
-    }
-  }
-
-  @SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
-  private long uploadFilePartAttempt(int attempt) throws IOException {
-    long uploadLen = backupFile.length();
-    SwiftUtils.debug(LOG, "Uploading part %d of file %s;" +
-                          " localfile=%s of length %d  - attempt %d",
-                     partNumber,
-                     key,
-                     backupFile,
-                     uploadLen,
-                     attempt);
-    nativeStore.uploadFilePart(new Path(key),
-                               partNumber,
-                               new FileInputStream(backupFile),
-                               uploadLen);
-    return uploadLen;
-  }
-
-  /**
-   * Get the file partition size
-   * @return the partition size
-   */
-  long getFilePartSize() {
-    return filePartSize;
-  }
-
-  /**
-   * Query the number of partitions written
-   * This is intended for testing
-   * @return the number of partitions already written to the remote FS
-   */
-  synchronized int getPartitionsWritten() {
-    return partNumber - 1;
-  }
-
-  /**
-   * Get the number of bytes written to the output stream.
-   * This should always be greater than or equal to bytesUploaded.
-   * @return the number of bytes written to this stream
-   */
-  long getBytesWritten() {
-    return bytesWritten;
-  }
-
-  /**
-   * Get the number of bytes uploaded to remote Swift cluster.
-   * bytesWritten - bytesUploaded = the number of bytes left to upload
-   * @return the number of bytes written to the remote endpoint
-   */
-  long getBytesUploaded() {
-    return bytesUploaded;
-  }
-
-  @Override
-  public String toString() {
-    return "SwiftNativeOutputStream{" +
-           "key='" + key + '\'' +
-           ", backupFile=" + backupFile +
-           ", closed=" + closed +
-           ", filePartSize=" + filePartSize +
-           ", partNumber=" + partNumber +
-           ", blockOffset=" + blockOffset +
-           ", partUpload=" + partUpload +
-           ", nativeStore=" + nativeStore +
-           ", bytesWritten=" + bytesWritten +
-           ", bytesUploaded=" + bytesUploaded +
-           '}';
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java
deleted file mode 100644
index ca8adc6244c..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.snative;
-
-import java.util.Date;
-
-/**
- * Java mapping of Swift JSON file status.
- * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
- */
-
-class SwiftObjectFileStatus {
-  private long bytes;
-  private String content_type;
-  private String hash;
-  private Date last_modified;
-  private String name;
-  private String subdir;
-
-  SwiftObjectFileStatus() {
-  }
-
-  SwiftObjectFileStatus(long bytes, String content_type, String hash,
-                        Date last_modified, String name) {
-    this.bytes = bytes;
-    this.content_type = content_type;
-    this.hash = hash;
-    this.last_modified = last_modified;
-    this.name = name;
-  }
-
-  public long getBytes() {
-    return bytes;
-  }
-
-  public void setBytes(long bytes) {
-    this.bytes = bytes;
-  }
-
-  public String getContent_type() {
-    return content_type;
-  }
-
-  public void setContent_type(String content_type) {
-    this.content_type = content_type;
-  }
-
-  public String getHash() {
-    return hash;
-  }
-
-  public void setHash(String hash) {
-    this.hash = hash;
-  }
-
-  public Date getLast_modified() {
-    return last_modified;
-  }
-
-  public void setLast_modified(Date last_modified) {
-    this.last_modified = last_modified;
-  }
-
-  public String getName() {
-    return pathToRootPath(name);
-  }
-
-  public void setName(String name) {
-    this.name = name;
-  }
-
-  public String getSubdir() {
-    return pathToRootPath(subdir);
-  }
-
-  public void setSubdir(String subdir) {
-    this.subdir = subdir;
-  }
-
-  /**
-   * If the path doesn't start with '/',
-   * this method will prepend '/'.
-   *
-   * @param path specified path
-   * @return root path string
-   */
-  private String pathToRootPath(String path) {
-    if (path == null) {
-      return null;
-    }
-
-    if (path.startsWith("/")) {
-      return path;
-    }
-
-    return "/".concat(path);
-  }
-}
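The `pathToRootPath` helper above normalizes listing entries so that both `name` and `subdir` always come back rooted with a leading slash. A minimal standalone sketch of the same normalization (the class and method names here are illustrative, not part of the removed API):

```java
/** Illustrative sketch of the leading-slash normalization in SwiftObjectFileStatus. */
public class RootPath {

  /** Return the path with a guaranteed leading '/', or null for null input. */
  public static String toRootPath(String path) {
    if (path == null) {
      return null;  // preserve "field absent in the JSON" semantics
    }
    return path.startsWith("/") ? path : "/" + path;
  }

  public static void main(String[] args) {
    System.out.println(toRootPath("data/part-0000")); // /data/part-0000
    System.out.println(toRootPath("/already/rooted")); // /already/rooted
  }
}
```

Keeping null as null matters here: Jackson leaves unset fields null, and a listing entry is either a file (`name` set) or a pseudo-directory (`subdir` set), never both.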
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java
deleted file mode 100644
index 3071f946824..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-public class Duration {
-
-  private final long started;
-  private long finished;
-
-  public Duration() {
-    started = time();
-    finished = started;
-  }
-
-  private long time() {
-    return System.currentTimeMillis();
-  }
-
-  public void finished() {
-    finished = time();
-  }
-
-  public String getDurationString() {
-    return humanTime(value());
-  }
-
-  public static String humanTime(long time) {
-    long seconds = (time / 1000);
-    long minutes = (seconds / 60);
-    return String.format("%d:%02d:%03d", minutes, seconds % 60, time % 1000);
-  }
-
-  @Override
-  public String toString() {
-    return getDurationString();
-  }
-
-  public long value() {
-    return finished - started;
-  }
-}
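`Duration.humanTime` renders a millisecond count as `minutes:seconds:millis`. A standalone sketch of the same arithmetic (the class name is illustrative):

```java
/** Illustrative sketch of Duration.humanTime's minutes:seconds:millis format. */
public class HumanTime {

  public static String humanTime(long timeMillis) {
    long seconds = timeMillis / 1000;
    long minutes = seconds / 60;
    // %02d zero-pads seconds, %03d zero-pads the millisecond remainder
    return String.format("%d:%02d:%03d", minutes, seconds % 60, timeMillis % 1000);
  }

  public static void main(String[] args) {
    System.out.println(humanTime(61500L)); // 1:01:500
  }
}
```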
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java
deleted file mode 100644
index 734cf8b6dc1..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java
+++ /dev/null
@@ -1,154 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-/**
- * Build ongoing statistics from duration data
- */
-public class DurationStats {
-
-  final String operation;
-  int n;
-  long sum;
-  long min;
-  long max;
-  double mean, m2;
-
-  /**
-   * Construct statistics for a given operation.
-   * @param operation operation
-   */
-  public DurationStats(String operation) {
-    this.operation = operation;
-    reset();
-  }
-
-  /**
-   * Construct from another stats entry;
-   * all values are copied.
-   * @param that the source statistics
-   */
-  public DurationStats(DurationStats that) {
-    operation = that.operation;
-    n = that.n;
-    sum = that.sum;
-    min = that.min;
-    max = that.max;
-    mean = that.mean;
-    m2 = that.m2;
-  }
-
-  /**
-   * Add a duration
-   * @param duration the new duration
-   */
-  public void add(Duration duration) {
-    add(duration.value());
-  }
-
-  /**
-   * Add a number
-   * @param x the number
-   */
-  public void add(long x) {
-    n++;
-    sum += x;
-    double delta = x - mean;
-    mean += delta / n;
-    m2 += delta * (x - mean);
-    if (x < min) {
-      min = x;
-    }
-    if (x > max) {
-      max = x;
-    }
-  }
-
-  /**
-   * Reset the data
-   */
-  public void reset() {
-    n = 0;
-    sum = 0;
-    min = 10000000;
-    max = 0;
-    mean = 0;
-    m2 = 0;
-  }
-
-  /**
-   * Get the number of entries sampled
-   * @return the number of durations added
-   */
-  public int getCount() {
-    return n;
-  }
-
-  /**
-   * Get the sum of all durations.
-   * @return the sum of the durations
-   */
-  public long getSum() {
-    return sum;
-  }
-
-  /**
-   * Get the arithmetic mean of the aggregate statistics
-   * @return the arithmetic mean
-   */
-  public double getArithmeticMean() {
-    return mean;
-  }
-
-  /**
-   * Variance, sigma^2
-   * @return the variance, or 0 if there are no samples.
-   */
-  public double getVariance() {
-    return n > 1 ? (m2 / (n - 1)) : 0;
-  }
-
-  /**
-   * Get the std deviation, sigma
-   * @return the stddev, 0 may mean there are no samples.
-   */
-  public double getDeviation() {
-    double variance = getVariance();
-    return (variance > 0) ? Math.sqrt(variance) : 0;
-  }
-
-  /**
-   * Convert to a useful string.
-   * @return a human readable summary
-   */
-  @Override
-  public String toString() {
-    return String.format(
-      "%s count=%d total=%.3fs mean=%.3fs stddev=%.3fs min=%.3fs max=%.3fs",
-      operation,
-      n,
-      sum / 1000.0,
-      mean / 1000.0,
-      getDeviation() / 1000.0,
-      min / 1000.0,
-      max / 1000.0);
-  }
-
-}
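`DurationStats.add(long)` above is Welford's online algorithm: the running mean and the running sum of squared deviations (`m2`) are updated per sample, so the variance can be reported at any time without storing the samples. A self-contained sketch of just that update (names are illustrative):

```java
/** Minimal sketch of the Welford update used by DurationStats.add(long). */
public class Welford {
  private int n;
  private double mean, m2;

  public void add(long x) {
    n++;
    double delta = x - mean;
    mean += delta / n;         // incremental mean update
    m2 += delta * (x - mean);  // incremental sum of squared deviations
  }

  public double mean() {
    return mean;
  }

  /** Sample variance, m2/(n-1); 0 when there are fewer than two samples. */
  public double variance() {
    return n > 1 ? m2 / (n - 1) : 0;
  }

  public static void main(String[] args) {
    Welford w = new Welford();
    for (long x : new long[]{2, 4, 4, 4, 5, 5, 7, 9}) {
      w.add(x);
    }
    System.out.printf("mean=%.3f variance=%.3f%n", w.mean(), w.variance());
  }
}
```

The single-pass form avoids the catastrophic cancellation that the naive `sumOfSquares - n*mean^2` formula suffers from when durations are large and tightly clustered.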
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java
deleted file mode 100644
index 58f8f0b641d..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.swift.util;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * Build a duration stats table to which you can add statistics.
- * Designed to be multithreaded
- */
-public class DurationStatsTable {
-
-  private Map<String,DurationStats> statsTable
-    = new HashMap<String, DurationStats>(6);
-
-  /**
-   * Add an operation.
-   * @param operation operation name
-   * @param duration duration
-   * @param success whether the operation succeeded
-   */
-  public void add(String operation, Duration duration, boolean success) {
-    DurationStats durationStats;
-    String key = operation;
-    if (!success) {
-      key += "-FAIL";
-    }
-    synchronized (this) {
-      durationStats = statsTable.get(key);
-      if (durationStats == null) {
-        durationStats = new DurationStats(key);
-        statsTable.put(key, durationStats);
-      }
-    }
-    synchronized (durationStats) {
-      durationStats.add(duration);
-    }
-  }
-
-  /**
-   * Get the current duration statistics
-   * @return a snapshot of the statistics
-   */
-   public synchronized List<DurationStats> getDurationStatistics() {
-     List<DurationStats> results = new ArrayList<DurationStats>(statsTable.size());
-     for (DurationStats stat: statsTable.values()) {
-       results.add(new DurationStats(stat));
-     }
-     return results;
-   }
-
-  /**
-   * reset the values of the statistics. This doesn't delete them, merely zeroes them.
-   */
-  public synchronized void reset() {
-    for (DurationStats stat : statsTable.values()) {
-      stat.reset();
-    }
-  }
-}
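The double synchronization in `add()` above (one lock to find-or-create the per-key stats object, a second on the stats object itself while updating it) is the classic pre-Java-8 pattern. On current JDKs the find-or-create step can be expressed with `Map.computeIfAbsent` on a `ConcurrentHashMap`. A hedged sketch of that alternative, using a plain `long[]` accumulator rather than the full `DurationStats` (names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: find-or-create per-key accumulators with computeIfAbsent. */
public class StatsTable {
  private final Map<String, long[]> table = new ConcurrentHashMap<>();

  /** Record one duration (millis) under the key; [0]=count, [1]=sum. */
  public void add(String operation, long millis, boolean success) {
    String key = success ? operation : operation + "-FAIL";
    long[] stats = table.computeIfAbsent(key, k -> new long[2]);
    synchronized (stats) {  // still lock the accumulator itself for the update
      stats[0]++;
      stats[1] += millis;
    }
  }

  public long count(String key) {
    long[] stats = table.get(key);
    return stats == null ? 0 : stats[0];
  }

  public static void main(String[] args) {
    StatsTable t = new StatsTable();
    t.add("GET", 12, true);
    t.add("GET", 30, true);
    t.add("GET", 7, false);
    System.out.println(t.count("GET") + " " + t.count("GET-FAIL")); // 2 1
  }
}
```

Note the per-accumulator lock is still needed: `computeIfAbsent` makes creation atomic, but the subsequent read-modify-write of the stats is not.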
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java
deleted file mode 100644
index 1cc340d83d9..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java
+++ /dev/null
@@ -1,121 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-
-import org.apache.http.Header;
-import org.apache.http.HttpResponse;
-import org.apache.http.util.EncodingUtils;
-
-import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.HEADER_CONTENT_LENGTH;
-
-/**
- * Utility class for parsing HttpResponse. This class is implemented like
- * {@code org.apache.commons.httpclient.HttpMethodBase.java} in httpclient 3.x.
- */
-public abstract class HttpResponseUtils {
-
-  /**
-   * Returns the response body of the HTTPResponse, if any, as an array of bytes.
-   * If the response body is not available or cannot be read, returns <tt>null</tt>.
-   *
-   * Note: This will cause the entire response body to be buffered in memory. A
-   * malicious server may easily exhaust all the VM memory. It is strongly
-   * recommended to use getResponseAsStream if the content length of the
-   * response is unknown or reasonably large.
-   *
-   * @param resp HttpResponse
-   * @return The response body
-   * @throws IOException If an I/O (transport) problem occurs while obtaining
-   * the response body.
-   */
-  public static byte[] getResponseBody(HttpResponse resp) throws IOException {
-    try(InputStream instream = resp.getEntity().getContent()) {
-      if (instream != null) {
-        long contentLength = resp.getEntity().getContentLength();
-        if (contentLength > Integer.MAX_VALUE) {
-          //guard integer cast from overflow
-          throw new IOException("Content too large to be buffered: "
-              + contentLength +" bytes");
-        }
-        ByteArrayOutputStream outstream = new ByteArrayOutputStream(
-            contentLength > 0 ? (int) contentLength : 4*1024);
-        byte[] buffer = new byte[4096];
-        int len;
-        while ((len = instream.read(buffer)) > 0) {
-          outstream.write(buffer, 0, len);
-        }
-        outstream.close();
-        return outstream.toByteArray();
-      }
-    }
-    return null;
-  }
-
-  /**
-   * Returns the response body of the HTTPResponse, if any, as a {@link String}.
-   * If the response body is not available or cannot be read, returns <tt>null</tt>.
-   * The string conversion on the data is done using UTF-8.
-   *
-   * Note: This will cause the entire response body to be buffered in memory. A
-   * malicious server may easily exhaust all the VM memory. It is strongly
-   * recommended to use getResponseAsStream if the content length of the
-   * response is unknown or reasonably large.
-   *
-   * @param resp HttpResponse
-   * @return The response body.
-   * @throws IOException If an I/O (transport) problem occurs while obtaining
-   * the response body.
-   */
-  public static String getResponseBodyAsString(HttpResponse resp)
-      throws IOException {
-    byte[] rawdata = getResponseBody(resp);
-    if (rawdata != null) {
-      return EncodingUtils.getString(rawdata, "UTF-8");
-    } else {
-      return null;
-    }
-  }
-
-  /**
-   * Return the length (in bytes) of the response body, as specified in a
-   * <tt>Content-Length</tt> header.
-   *
-   * <p>
-   * Return <tt>-1</tt> when the content-length is unknown.
-   * </p>
-   *
-   * @param resp HttpResponse
-   * @return content length, if <tt>Content-Length</tt> header is available.
-   *          <tt>0</tt> indicates that the response has no body.
-   *          If <tt>Content-Length</tt> header is not present, the method
-   *          returns <tt>-1</tt>.
-   */
-  public static long getContentLength(HttpResponse resp) {
-    Header header = resp.getFirstHeader(HEADER_CONTENT_LENGTH);
-    if (header == null) {
-      return -1;
-    } else {
-      return Long.parseLong(header.getValue());
-    }
-  }
-}
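`getResponseBody` above is the standard buffer-and-copy loop for a bounded stream, with a guard against the `long` content length overflowing the `int` buffer size. A standalone sketch of the same loop without the HttpClient types (method and class names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

/** Sketch of the bounded buffer-and-copy loop used by getResponseBody. */
public class StreamBuffer {

  public static byte[] readFully(InputStream in, long declaredLength)
      throws IOException {
    if (declaredLength > Integer.MAX_VALUE) {
      // guard the long -> int cast, as the original does
      throw new IOException("Content too large to be buffered: "
          + declaredLength + " bytes");
    }
    // size the buffer from the declared length when known, else start at 4KB
    ByteArrayOutputStream out = new ByteArrayOutputStream(
        declaredLength > 0 ? (int) declaredLength : 4 * 1024);
    byte[] buffer = new byte[4096];
    int len;
    while ((len = in.read(buffer)) > 0) {
      out.write(buffer, 0, len);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "hello swift".getBytes(StandardCharsets.UTF_8);
    byte[] copy = readFully(new ByteArrayInputStream(data), data.length);
    System.out.println(new String(copy, StandardCharsets.UTF_8)); // hello swift
  }
}
```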
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java
deleted file mode 100644
index fee7e7f5697..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java
+++ /dev/null
@@ -1,124 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-import com.fasterxml.jackson.core.JsonGenerationException;
-import com.fasterxml.jackson.core.type.TypeReference;
-import com.fasterxml.jackson.databind.JsonMappingException;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.type.CollectionType;
-import org.apache.hadoop.fs.swift.exceptions.SwiftJsonMarshallingException;
-
-import java.io.IOException;
-import java.io.StringWriter;
-import java.io.Writer;
-
-
-public class JSONUtil {
-  private static ObjectMapper jsonMapper = new ObjectMapper();
-
-  /**
-   * Private constructor.
-   */
-  private JSONUtil() {
-  }
-
-  /**
-   * Convert an object to a JSON string. If errors appear, throws
-   * SwiftJsonMarshallingException.
-   *
-   * @param object The object to convert.
-   * @return The JSON string representation.
-   * @throws IOException IO issues
-   * @throws SwiftJsonMarshallingException failure to generate JSON
-   */
-  public static String toJSON(Object object) throws
-                                             IOException {
-    Writer json = new StringWriter();
-    try {
-      jsonMapper.writeValue(json, object);
-      return json.toString();
-    } catch (JsonGenerationException | JsonMappingException e) {
-      throw new SwiftJsonMarshallingException(e.toString(), e);
-    }
-  }
-
-  /**
-   * Convert a string representation to an object. If errors appear, throws
-   * SwiftJsonMarshallingException.
-   *
-   * @param value The JSON string.
-   * @param klazz The class to convert.
-   * @return The Object of the given class.
-   */
-  public static <T> T toObject(String value, Class<T> klazz) throws
-                                                             IOException {
-    try {
-      return jsonMapper.readValue(value, klazz);
-    } catch (JsonGenerationException e) {
-      throw new SwiftJsonMarshallingException(e.toString()
-                                              + " source: " + value,
-                                              e);
-    } catch (JsonMappingException e) {
-      throw new SwiftJsonMarshallingException(e.toString()
-                                              + " source: " + value,
-                                              e);
-    }
-  }
-
-  /**
-   * @param value         json string
-   * @param typeReference class type reference
-   * @param <T>           type
-   * @return deserialized  T object
-   */
-  @SuppressWarnings("unchecked")
-  public static <T> T toObject(String value,
-                               final TypeReference<T> typeReference)
-            throws IOException {
-    try {
-      return (T)jsonMapper.readValue(value, typeReference);
-    } catch (JsonGenerationException | JsonMappingException e) {
-      throw new SwiftJsonMarshallingException("Error generating response", e);
-    }
-  }
-
-  /**
-   * @param value          json string
-   * @param collectionType class describing how to deserialize collection of objects
-   * @param <T>            type
-   * @return deserialized  T object
-   */
-  @SuppressWarnings("unchecked")
-  public static <T> T toObject(String value,
-                               final CollectionType collectionType)
-              throws IOException {
-    try {
-      return (T)jsonMapper.readValue(value, collectionType);
-    } catch (JsonGenerationException | JsonMappingException e) {
-      throw new SwiftJsonMarshallingException(e.toString()
-                                              + " source: " + value,
-                                              e);
-    }
-  }
-
-  public static ObjectMapper getJsonMapper() {
-    return jsonMapper;
-  }
-}
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java
deleted file mode 100644
index 791509a9e03..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java
+++ /dev/null
@@ -1,187 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-import org.apache.hadoop.fs.swift.http.RestClientBindings;
-
-import java.net.URI;
-import java.util.regex.Pattern;
-
-/**
- * Swift hierarchy mapping of (container, path)
- */
-public final class SwiftObjectPath {
-  private static final Pattern PATH_PART_PATTERN = Pattern.compile(".*/AUTH_\\w*/");
-
-  /**
-   * Swift container
-   */
-  private final String container;
-
-  /**
-   * swift object
-   */
-  private final String object;
-
-  private final String uriPath;
-
-  /**
-   * Build an instance from a (container, object) pair
-   *
-   * @param container container name
-   * @param object    object ref underneath the container
-   */
-  public SwiftObjectPath(String container, String object) {
-
-    if (object == null) {
-      throw new IllegalArgumentException("object name can't be null");
-    }
-
-    this.container = container;
-    this.object = URI.create(object).getPath();
-    uriPath = buildUriPath();
-  }
-
-  public String getContainer() {
-    return container;
-  }
-
-  public String getObject() {
-    return object;
-  }
-
-  @Override
-  public boolean equals(Object o) {
-    if (this == o) return true;
-    if (!(o instanceof SwiftObjectPath)) return false;
-    final SwiftObjectPath that = (SwiftObjectPath) o;
-    return this.toUriPath().equals(that.toUriPath());
-  }
-
-  @Override
-  public int hashCode() {
-    int result = container.hashCode();
-    result = 31 * result + object.hashCode();
-    return result;
-  }
-
-  private String buildUriPath() {
-    return SwiftUtils.joinPaths(container, object);
-  }
-
-  public String toUriPath() {
-    return uriPath;
-  }
-
-  @Override
-  public String toString() {
-    return toUriPath();
-  }
-
-  /**
-   * Test for the object matching a path, ignoring the container
-   * value.
-   *
-   * @param path path string
-   * @return true iff the object's name matches the path
-   */
-  public boolean objectMatches(String path) {
-    return object.equals(path);
-  }
-
-
-  /**
-   * Query to see if the possibleChild object is a child path of this
-   * object.
-   *
-   * The test is done by probing for the path of this object being
-   * at the start of the second - with a trailing slash, and both
-   * containers being equal.
-   *
-   * @param possibleChild possible child dir
-   * @return true iff the possibleChild is under this object
-   */
-  public boolean isEqualToOrParentOf(SwiftObjectPath possibleChild) {
-    String origPath = toUriPath();
-    String path = origPath;
-    if (!path.endsWith("/")) {
-      path = path + "/";
-    }
-    String childPath = possibleChild.toUriPath();
-    return childPath.equals(origPath) || childPath.startsWith(path);
-  }
-
-  /**
-   * Create a path tuple of (container, path), where the container is
-   * chosen from the host of the URI.
-   *
-   * @param uri  uri to start from
-   * @param path path underneath
-   * @return a new instance.
-   * @throws SwiftConfigurationException if the URI host doesn't parse into
-   *                                     container.service
-   */
-  public static SwiftObjectPath fromPath(URI uri,
-                                         Path path)
-          throws SwiftConfigurationException {
-    return fromPath(uri, path, false);
-  }
-
-  /**
-   * Create a path tuple of (container, path), where the container is
-   * chosen from the host of the URI.
-   * A trailing slash can be added to the path. This is the point where
-   * these slashes need to be appended, because when you construct a {@link Path}
-   * instance, {@link Path#normalizePath(String, String)} is called
-   * - which strips off any trailing slash.
-   *
-   * @param uri              uri to start from
-   * @param path             path underneath
-   * @param addTrailingSlash should a trailing slash be added if there isn't one.
-   * @return a new instance.
-   * @throws SwiftConfigurationException if the URI host doesn't parse into
-   *                                     container.service
-   */
-  public static SwiftObjectPath fromPath(URI uri,
-                                         Path path,
-                                         boolean addTrailingSlash)
-          throws SwiftConfigurationException {
-
-    String url =
-            path.toUri().getPath().replaceAll(PATH_PART_PATTERN.pattern(), "");
-    //add a trailing slash if needed
-    if (addTrailingSlash && !url.endsWith("/")) {
-      url += "/";
-    }
-
-    String container = uri.getHost();
-    if (container == null) {
-      //no container, not good: replace with ""
-      container = "";
-    } else if (container.contains(".")) {
-      //it's a container.service URI. Extract the container
-      container = RestClientBindings.extractContainerName(container);
-    }
-    return new SwiftObjectPath(container, url);
-  }
-
-
-}
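`isEqualToOrParentOf` relies on appending a trailing slash before the prefix test, so that `/c/ab` is not wrongly treated as a parent of `/c/abc`. A standalone sketch of just that check, on plain strings (names are illustrative):

```java
/** Sketch of the trailing-slash parent test from SwiftObjectPath.isEqualToOrParentOf. */
public class PathPrefix {

  public static boolean isEqualToOrParentOf(String parent, String child) {
    // equal paths count as "parent of" per the original contract
    String withSlash = parent.endsWith("/") ? parent : parent + "/";
    // otherwise require a '/'-terminated prefix, not a bare string prefix
    return child.equals(parent) || child.startsWith(withSlash);
  }

  public static void main(String[] args) {
    System.out.println(isEqualToOrParentOf("/c/a", "/c/a/b"));  // true
    System.out.println(isEqualToOrParentOf("/c/ab", "/c/abc")); // false: not a path child
  }
}
```

Without the appended slash, a plain `startsWith` would report `/c/abc` as a child of `/c/ab`, which is the classic bug in prefix-based hierarchy tests over flat object stores.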
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
deleted file mode 100644
index 2e3abce251a..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
+++ /dev/null
@@ -1,547 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
-import org.junit.internal.AssumptionViolatedException;
-
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.net.URI;
-import java.net.URISyntaxException;
-import java.util.Properties;
-
-/**
- * Utilities used across test cases
- */
-public class SwiftTestUtils extends org.junit.Assert {
-
-  private static final Logger LOG =
-      LoggerFactory.getLogger(SwiftTestUtils.class);
-
-  public static final String TEST_FS_SWIFT = "test.fs.swift.name";
-  public static final String IO_FILE_BUFFER_SIZE = "io.file.buffer.size";
-
-  /**
-   * Get the test URI.
-   * @param conf configuration
-   * @return the test filesystem URI
-   * @throws SwiftConfigurationException missing parameter or bad URI
-   */
-  public static URI getServiceURI(Configuration conf) throws
-                                                      SwiftConfigurationException {
-    String instance = conf.get(TEST_FS_SWIFT);
-    if (instance == null) {
-      throw new SwiftConfigurationException(
-        "Missing configuration entry " + TEST_FS_SWIFT);
-    }
-    try {
-      return new URI(instance);
-    } catch (URISyntaxException e) {
-      throw new SwiftConfigurationException("Bad URI: " + instance);
-    }
-  }
-
-  public static boolean hasServiceURI(Configuration conf) {
-    String instance = conf.get(TEST_FS_SWIFT);
-    return instance != null;
-  }
-
-  /**
-   * Assert that a property in the property set matches the expected value
-   * @param props property set
-   * @param key property name
-   * @param expected expected value. If null, the property must not be in the set
-   */
-  public static void assertPropertyEquals(Properties props,
-                                          String key,
-                                          String expected) {
-    String val = props.getProperty(key);
-    if (expected == null) {
-      assertNull("Non null property " + key + " = " + val, val);
-    } else {
-      assertEquals("property " + key + " = " + val,
-                          expected,
-                          val);
-    }
-  }
-
-  /**
-   *
-   * Write a file and read it in, validating the result. Optional flags control
-   * whether file overwrite operations should be enabled, and whether the
-   * file should be deleted afterwards.
-   *
-   * If there is a mismatch between what was written and what was expected,
-   * a small range of bytes either side of the first error are logged to aid
-   * diagnosing what problem occurred -whether it was a previous file
-   * or a corrupting of the current file. This assumes that two
-   * sequential runs to the same path use datasets with different character
-   * moduli.
-   *
-   * @param fs filesystem
-   * @param path path to write to
-   * @param len length of data
-   * @param overwrite should the create option allow overwrites?
-   * @param delete should the file be deleted afterwards? -with a verification
-   * that it worked. Deletion is not attempted if an assertion has failed
-   * earlier -it is not in a <code>finally{}</code> block.
-   * @throws IOException IO problems
-   */
-  public static void writeAndRead(FileSystem fs,
-                                  Path path,
-                                  byte[] src,
-                                  int len,
-                                  int blocksize,
-                                  boolean overwrite,
-                                  boolean delete) throws IOException {
-    fs.mkdirs(path.getParent());
-
-    writeDataset(fs, path, src, len, blocksize, overwrite);
-
-    byte[] dest = readDataset(fs, path, len);
-
-    compareByteArrays(src, dest, len);
-
-    if (delete) {
-      boolean deleted = fs.delete(path, false);
-      assertTrue("Deleted", deleted);
-      assertPathDoesNotExist(fs, "Cleanup failed", path);
-    }
-  }
-
-  /**
-   * Write a file.
-   * Optional flags control
-   * whether file overwrite operations should be enabled
-   * @param fs filesystem
-   * @param path path to write to
-   * @param len length of data
-   * @param overwrite should the create option allow overwrites?
-   * @throws IOException IO problems
-   */
-  public static void writeDataset(FileSystem fs,
-                                   Path path,
-                                   byte[] src,
-                                   int len,
-                                   int blocksize,
-                                   boolean overwrite) throws IOException {
-    assertTrue(
-      "Not enough data in source array to write " + len + " bytes",
-      src.length >= len);
-    FSDataOutputStream out = fs.create(path,
-                                       overwrite,
-                                       fs.getConf()
-                                         .getInt(IO_FILE_BUFFER_SIZE,
-                                                 4096),
-                                       (short) 1,
-                                       blocksize);
-    out.write(src, 0, len);
-    out.close();
-    assertFileHasLength(fs, path, len);
-  }
-
-  /**
-   * Read the file and convert to a byte dataset
-   * @param fs filesystem
-   * @param path path to read from
-   * @param len length of data to read
-   * @return the bytes
-   * @throws IOException IO problems
-   */
-  public static byte[] readDataset(FileSystem fs, Path path, int len)
-      throws IOException {
-    FSDataInputStream in = fs.open(path);
-    byte[] dest = new byte[len];
-    try {
-      in.readFully(0, dest);
-    } finally {
-      in.close();
-    }
-    return dest;
-  }
-
-  /**
-   * Assert that the array src[0..len] and dest[] are equal
-   * @param src source data
-   * @param dest actual
-   * @param len length of bytes to compare
-   */
-  public static void compareByteArrays(byte[] src,
-                                       byte[] dest,
-                                       int len) {
-    assertEquals("Number of bytes read != number written",
-                        len, dest.length);
-    int errors = 0;
-    int first_error_byte = -1;
-    for (int i = 0; i < len; i++) {
-      if (src[i] != dest[i]) {
-        if (errors == 0) {
-          first_error_byte = i;
-        }
-        errors++;
-      }
-    }
-
-    if (errors > 0) {
-      String message = String.format(" %d errors in file of length %d",
-                                     errors, len);
-      LOG.warn(message);
-      // the range either side of the first error to print
-      // this is a purely arbitrary number, to aid user debugging
-      final int overlap = 10;
-      for (int i = Math.max(0, first_error_byte - overlap);
-           i < Math.min(first_error_byte + overlap, len);
-           i++) {
-        byte actual = dest[i];
-        byte expected = src[i];
-        String letter = toChar(actual);
-        String line = String.format("[%04d] %2x %s%n", i, actual, letter);
-        if (expected != actual) {
-          line = String.format("[%04d] %2x %s -expected %2x %s%n",
-                               i,
-                               actual,
-                               letter,
-                               expected,
-                               toChar(expected));
-        }
-        LOG.warn(line);
-      }
-      fail(message);
-    }
-  }
-
-  /**
-   * Convert a byte to a character for printing. If the
-   * byte value is &lt; 32 -and hence unprintable- the byte is
-   * returned as a two digit hex value
-   * @param b byte
-   * @return the printable character string
-   */
-  public static String toChar(byte b) {
-    if (b >= 0x20) {
-      return Character.toString((char) b);
-    } else {
-      return String.format("%02x", b);
-    }
-  }
-
-  public static String toChar(byte[] buffer) {
-    StringBuilder builder = new StringBuilder(buffer.length);
-    for (byte b : buffer) {
-      builder.append(toChar(b));
-    }
-    return builder.toString();
-  }
-
-  public static byte[] toAsciiByteArray(String s) {
-    char[] chars = s.toCharArray();
-    int len = chars.length;
-    byte[] buffer = new byte[len];
-    for (int i = 0; i < len; i++) {
-      buffer[i] = (byte) (chars[i] & 0xff);
-    }
-    return buffer;
-  }
-
-  public static void cleanupInTeardown(FileSystem fileSystem,
-                                       String cleanupPath) {
-    cleanup("TEARDOWN", fileSystem, cleanupPath);
-  }
-
-  public static void cleanup(String action,
-                             FileSystem fileSystem,
-                             String cleanupPath) {
-    noteAction(action);
-    try {
-      if (fileSystem != null) {
-        fileSystem.delete(fileSystem.makeQualified(new Path(cleanupPath)),
-                          true);
-      }
-    } catch (Exception e) {
-      LOG.error("Error deleting in "+ action + " - "  + cleanupPath + ": " + e, e);
-    }
-  }
-
-  public static void noteAction(String action) {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("==============  "+ action +" =============");
-    }
-  }
-
-  /**
-   * Downgrade a failure to a warning message, then throw an
-   * exception for the JUnit test runner to mark the test as skipped
-   * @param message text message
-   * @param failure what failed
-   * @throws AssumptionViolatedException always
-   */
-  public static void downgrade(String message, Throwable failure) {
-    LOG.warn("Downgrading test " + message, failure);
-    AssumptionViolatedException ave =
-      new AssumptionViolatedException(failure, null);
-    throw ave;
-  }
-
-  /**
-   * report an overridden test as unsupported
-   * @param message message to use in the text
-   * @throws AssumptionViolatedException always
-   */
-  public static void unsupported(String message) {
-    throw new AssumptionViolatedException(message);
-  }
-
-  /**
-   * report a test has been skipped for some reason
-   * @param message message to use in the text
-   * @throws AssumptionViolatedException always
-   */
-  public static void skip(String message) {
-    throw new AssumptionViolatedException(message);
-  }
-
-
-  /**
-   * Make an assertion about the length of a file
-   * @param fs filesystem
-   * @param path path of the file
-   * @param expected expected length
-   * @throws IOException on File IO problems
-   */
-  public static void assertFileHasLength(FileSystem fs, Path path,
-                                         int expected) throws IOException {
-    FileStatus status = fs.getFileStatus(path);
-    assertEquals(
-      "Wrong file length of file " + path + " status: " + status,
-      expected,
-      status.getLen());
-  }
-
-  /**
-   * Assert that a path refers to a directory
-   * @param fs filesystem
-   * @param path path of the directory
-   * @throws IOException on File IO problems
-   */
-  public static void assertIsDirectory(FileSystem fs,
-                                       Path path) throws IOException {
-    FileStatus fileStatus = fs.getFileStatus(path);
-    assertIsDirectory(fileStatus);
-  }
-
-  /**
-   * Assert that a path refers to a directory
-   * @param fileStatus stats to check
-   */
-  public static void assertIsDirectory(FileStatus fileStatus) {
-    assertTrue("Should be a dir -but isn't: " + fileStatus,
-                      fileStatus.isDirectory());
-  }
-
-  /**
-   * Write the text to a file, returning the converted byte array
-   * for use in validating the round trip
-   * @param fs filesystem
-   * @param path path of file
-   * @param text text to write
-   * @param overwrite should the operation overwrite any existing file?
-   * @return the read bytes
-   * @throws IOException on IO problems
-   */
-  public static byte[] writeTextFile(FileSystem fs,
-                                   Path path,
-                                   String text,
-                                   boolean overwrite) throws IOException {
-    FSDataOutputStream stream = fs.create(path, overwrite);
-    byte[] bytes = new byte[0];
-    if (text != null) {
-      bytes = toAsciiByteArray(text);
-      stream.write(bytes);
-    }
-    stream.close();
-    return bytes;
-  }
-
-  /**
-   * Touch a file: deletes any existing file first, then creates an empty one
-   * @param fs filesystem
-   * @param path path
-   * @throws IOException IO problems
-   */
-  public static void touch(FileSystem fs,
-                           Path path) throws IOException {
-    fs.delete(path, true);
-    writeTextFile(fs, path, null, false);
-  }
-
-  public static void assertDeleted(FileSystem fs,
-                                   Path file,
-                                   boolean recursive) throws IOException {
-    assertPathExists(fs, "about to be deleted file", file);
-    boolean deleted = fs.delete(file, recursive);
-    String dir = ls(fs, file.getParent());
-    assertTrue("Delete failed on " + file + ": " + dir, deleted);
-    assertPathDoesNotExist(fs, "Deleted file", file);
-  }
-
-  /**
-   * Read in "length" bytes, convert to an ascii string
-   * @param fs filesystem
-   * @param path path to read
-   * @param length #of bytes to read.
-   * @return the bytes read and converted to a string
-   * @throws IOException IO problems
-   */
-  public static String readBytesToString(FileSystem fs,
-                                  Path path,
-                                  int length) throws IOException {
-    FSDataInputStream in = fs.open(path);
-    try {
-      byte[] buf = new byte[length];
-      in.readFully(0, buf);
-      return toChar(buf);
-    } finally {
-      in.close();
-    }
-  }
-
-  public static String getDefaultWorkingDirectory() {
-    return "/user/" + System.getProperty("user.name");
-  }
-
-  public static String ls(FileSystem fileSystem, Path path) throws IOException {
-    return SwiftUtils.ls(fileSystem, path);
-  }
-
-  public static String dumpStats(String pathname, FileStatus[] stats) {
-    return pathname + SwiftUtils.fileStatsToString(stats,"\n");
-  }
-
-  /**
-   * Assert that a file exists and that its {@link FileStatus} entry
-   * declares that it is a file and not a symlink or directory.
-   * @param fileSystem filesystem to resolve path against
-   * @param filename name of the file
-   * @throws IOException IO problems during file operations
-   */
-  public static void assertIsFile(FileSystem fileSystem, Path filename) throws
-                                                                 IOException {
-    assertPathExists(fileSystem, "Expected file", filename);
-    FileStatus status = fileSystem.getFileStatus(filename);
-    String fileInfo = filename + "  " + status;
-    assertFalse("File claims to be a directory " + fileInfo,
-                status.isDirectory());
-/* disabled for Hadoop v1 compatibility
-    assertFalse("File claims to be a symlink " + fileInfo,
-                       status.isSymlink());
-*/
-  }
-
-  /**
-   * Create a dataset for use in the tests; all data is in the range
-   * base to (base+modulo-1) inclusive
-   * @param len length of data
-   * @param base base of the data
-   * @param modulo the modulo
-   * @return the newly generated dataset
-   */
-  public static byte[] dataset(int len, int base, int modulo) {
-    byte[] dataset = new byte[len];
-    for (int i = 0; i < len; i++) {
-      dataset[i] = (byte) (base + (i % modulo));
-    }
-    return dataset;
-  }
-
-  /**
-   * Assert that a path exists -but make no assertions as to the
-   * type of that entry
-   *
-   * @param fileSystem filesystem to examine
-   * @param message message to include in the assertion failure message
-   * @param path path in the filesystem
-   * @throws IOException IO problems
-   */
-  public static void assertPathExists(FileSystem fileSystem, String message,
-                               Path path) throws IOException {
-    try {
-      fileSystem.getFileStatus(path);
-    } catch (FileNotFoundException e) {
-      //failure, report it
-      throw (IOException)new FileNotFoundException(message + ": not found "
-          + path + " in " + path.getParent() + ": " + e + " -- "
-           + ls(fileSystem, path.getParent())).initCause(e);
-    }
-  }
-
-  /**
-   * Assert that a path does not exist
-   *
-   * @param fileSystem filesystem to examine
-   * @param message message to include in the assertion failure message
-   * @param path path in the filesystem
-   * @throws IOException IO problems
-   */
-  public static void assertPathDoesNotExist(FileSystem fileSystem,
-                                            String message,
-                                            Path path) throws IOException {
-    try {
-      FileStatus status = fileSystem.getFileStatus(path);
-      fail(message + ": unexpectedly found " + path + " as " + status);
-    } catch (FileNotFoundException expected) {
-      //this is expected
-
-    }
-  }
-
-
-  /**
-   * Assert that a FileSystem.listStatus on a dir finds the subdir/child entry
-   * @param fs filesystem
-   * @param dir directory to scan
-   * @param subdir full path to look for
-   * @throws IOException IO problems
-   */
-  public static void assertListStatusFinds(FileSystem fs,
-                                           Path dir,
-                                           Path subdir) throws IOException {
-    FileStatus[] stats = fs.listStatus(dir);
-    boolean found = false;
-    StringBuilder builder = new StringBuilder();
-    for (FileStatus stat : stats) {
-      builder.append(stat.toString()).append('\n');
-      if (stat.getPath().equals(subdir)) {
-        found = true;
-      }
-    }
-    assertTrue("Path " + subdir
-                      + " not found in directory " + dir + ":" + builder,
-                      found);
-  }
-
-}
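As an aside on the helpers removed above: SwiftTestUtils generated its test data with a small character modulus so that two sequential writes to the same path are distinguishable when diagnosing corruption. A standalone sketch of the `dataset`/`toChar` pattern follows; the class name `DatasetSketch` is mine, not from the patch, and this is an illustration of the technique rather than the Hadoop code itself.

```java
import java.nio.charset.StandardCharsets;

// Sketch of the dataset/compare pattern from the removed SwiftTestUtils:
// generated bytes cycle through [base, base+modulo-1], so runs with
// different moduli produce visibly different data on a readback mismatch.
public class DatasetSketch {

  // Generate len bytes cycling through [base, base + modulo - 1].
  static byte[] dataset(int len, int base, int modulo) {
    byte[] data = new byte[len];
    for (int i = 0; i < len; i++) {
      data[i] = (byte) (base + (i % modulo));
    }
    return data;
  }

  // Printable form of a byte: the character itself, or two-digit hex
  // for unprintable values below 0x20.
  static String toChar(byte b) {
    return b >= 0x20
        ? Character.toString((char) b)
        : String.format("%02x", b);
  }

  public static void main(String[] args) {
    byte[] data = dataset(8, 'a', 4);
    // prints "abcdabcd": base 'a', repeating with modulus 4
    System.out.println(new String(data, StandardCharsets.US_ASCII));
  }
}
```

A second run against the same path would typically use a different base or modulus (say base `'A'`, modulus 13), so a comparison failure immediately shows whether stale data from the previous run was read back.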
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java
deleted file mode 100644
index f218a80595a..00000000000
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java
+++ /dev/null
@@ -1,216 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- *  or more contributor license agreements.  See the NOTICE file
- *  distributed with this work for additional information
- *  regarding copyright ownership.  The ASF licenses this file
- *  to you under the Apache License, Version 2.0 (the
- *  "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-package org.apache.hadoop.fs.swift.util;
-
-import org.slf4j.Logger;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-
-import java.io.FileNotFoundException;
-import java.io.IOException;
-
-/**
- * Various utility methods for SwiftFS support
- */
-public final class SwiftUtils {
-
-  public static final String READ = "read(buffer, offset, length)";
-
-  /**
-   * Join two (non null) paths, inserting a forward slash between them
-   * if needed
-   *
-   * @param path1 first path
-   * @param path2 second path
-   * @return the combined path
-   */
-  public static String joinPaths(String path1, String path2) {
-    StringBuilder result =
-            new StringBuilder(path1.length() + path2.length() + 1);
-    result.append(path1);
-    boolean insertSlash = true;
-    if (path1.endsWith("/")) {
-      insertSlash = false;
-    } else if (path2.startsWith("/")) {
-      insertSlash = false;
-    }
-    if (insertSlash) {
-      result.append("/");
-    }
-    result.append(path2);
-    return result.toString();
-  }
-
-  /**
-   * This predicate contains the is-directory logic for Swift, so
-   * there is only one place to change it.
-   *
-   * @param fileStatus status to examine
-   * @return true if we consider this status to be representative of a
-   *         directory.
-   */
-  public static boolean isDirectory(FileStatus fileStatus) {
-    return fileStatus.isDirectory() || isFilePretendingToBeDirectory(fileStatus);
-  }
-
-  /**
-   * Test for the entry being a file that is treated as if it is a
-   * directory
-   *
-   * @param fileStatus status
-   * @return true if it meets the rules for being a directory
-   */
-  public static boolean isFilePretendingToBeDirectory(FileStatus fileStatus) {
-    return fileStatus.getLen() == 0;
-  }
-
-  /**
-   * Predicate: Is a swift object referring to the root directory?
-   * @param swiftObject object to probe
-   * @return true iff the object refers to the root
-   */
-  public static boolean isRootDir(SwiftObjectPath swiftObject) {
-    return swiftObject.objectMatches("") || swiftObject.objectMatches("/");
-  }
-
-  /**
-   * Sprintf() to the log iff the log is at debug level. If the log
-   * is not at debug level, the printf operation is skipped, so
-   * no time is spent generating the string.
-   * @param log log to use
-   * @param text text message
-   * @param args args arguments to the print statement
-   */
-  public static void debug(Logger log, String text, Object... args) {
-    if (log.isDebugEnabled()) {
-      log.debug(String.format(text, args));
-    }
-  }
-
-  /**
-   * Log an exception (in text and trace) iff the log is at debug
-   * @param log Log to use
-   * @param text text message
-   * @param ex exception
-   */
-  public static void debugEx(Logger log, String text, Exception ex) {
-    if (log.isDebugEnabled()) {
-      log.debug(text + ex, ex);
-    }
-  }
-
-  /**
-   * Sprintf() to the log iff the log is at trace level. If the log
-   * is not at trace level, the printf operation is skipped, so
-   * no time is spent generating the string.
-   * @param log log to use
-   * @param text text message
-   * @param args args arguments to the print statement
-   */
-  public static void trace(Logger log, String text, Object... args) {
-    if (log.isTraceEnabled()) {
-      log.trace(String.format(text, args));
-    }
-  }
-
-  /**
-   * Given a partition number, calculate the partition value.
-   * This is used in the SwiftNativeOutputStream, and is placed
-   * here for tests to be able to calculate the filename of
-   * a partition.
-   * @param partNumber part number
-   * @return a string to use as the filename
-   */
-  public static String partitionFilenameFromNumber(int partNumber) {
-    return String.format("%06d", partNumber);
-  }
-
-  /**
-   * List a path to a string
-   * @param fileSystem filesystem
-   * @param path directory
-   * @return a listing of the filestatuses of elements in the directory, one
-   * to a line, preceded by the full path of the directory
-   * @throws IOException connectivity problems
-   */
-  public static String ls(FileSystem fileSystem, Path path) throws
-                                                            IOException {
-    if (path == null) {
-      //surfaces when someone calls getParent() on something at the top of the path
-      return "/";
-    }
-    FileStatus[] stats;
-    String pathtext = "ls " + path;
-    try {
-      stats = fileSystem.listStatus(path);
-    } catch (FileNotFoundException e) {
-      return pathtext + " -file not found";
-    } catch (IOException e) {
-      return pathtext + " -failed: " + e;
-    }
-    return pathtext + fileStatsToString(stats, "\n");
-  }
-
-  /**
-   * Take an array of FileStatus entries and convert to a string, each entry prefixed with a two-digit counter
-   * @param stats array of stats
-   * @param separator separator after every entry
-   * @return a stringified set
-   */
-  public static String fileStatsToString(FileStatus[] stats, String separator) {
-    StringBuilder buf = new StringBuilder(stats.length * 128);
-    for (int i = 0; i < stats.length; i++) {
-      buf.append(String.format("[%02d] %s", i, stats[i])).append(separator);
-    }
-    return buf.toString();
-  }
-
-  /**
-   * Verify that the basic args to a read operation are valid;
-   * throws an exception if not -with meaningful text identifying the invalid argument
-   * @param buffer destination buffer
-   * @param off offset
-   * @param len number of bytes to read
-   * @throws NullPointerException null buffer
-   * @throws IndexOutOfBoundsException on any invalid range.
-   */
-  public static void validateReadArgs(byte[] buffer, int off, int len) {
-    if (buffer == null) {
-      throw new NullPointerException("Null byte array in " + READ);
-    }
-    if (off < 0 ) {
-      throw new IndexOutOfBoundsException("Negative buffer offset "
-                                          + off
-                                          + " in " + READ);
-    }
-    if (len < 0 ) {
-      throw new IndexOutOfBoundsException("Negative read length "
-                                          + len
-                                          + " in " + READ);
-    }
-    if (off > buffer.length) {
-      throw new IndexOutOfBoundsException("Buffer offset of "
-                                          + off
-                                          + " beyond buffer size of "
-                                          + buffer.length
-                                          + " in " + READ);
-    }
-  }
-}
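Two of the pure helpers removed with SwiftUtils above, `joinPaths` and `partitionFilenameFromNumber`, are simple enough to sketch standalone. The class name `PathJoinSketch` below is hypothetical; the bodies mirror the deleted logic (one slash inserted only when neither side supplies one, and six-digit zero-padded partition names) but are a sketch, not the Hadoop source.

```java
// Standalone sketch of two small helpers from the removed SwiftUtils.
public class PathJoinSketch {

  // Join two non-null paths, inserting "/" only if neither the end of
  // path1 nor the start of path2 already provides one.
  static String joinPaths(String path1, String path2) {
    boolean insertSlash = !path1.endsWith("/") && !path2.startsWith("/");
    return path1 + (insertSlash ? "/" : "") + path2;
  }

  // Partition filename: the part number as a zero-padded six-digit string,
  // as used for multi-part uploads by the removed output stream.
  static String partitionFilename(int partNumber) {
    return String.format("%06d", partNumber);
  }

  public static void main(String[] args) {
    System.out.println(joinPaths("a/b", "c"));   // prints "a/b/c"
    System.out.println(partitionFilename(42));   // prints "000042"
  }
}
```

Note that, like the original, this does not collapse a doubled slash when both sides supply one (`"a/"` + `"/b"` yields `"a//b"`); callers were expected to pass at most one separator.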
diff --git a/hadoop-tools/hadoop-openstack/src/site/markdown/index.md b/hadoop-tools/hadoop-openstack/src/site/markdown/index.md
deleted file mode 100644
index 1815f60c613..00000000000
--- a/hadoop-tools/hadoop-openstack/src/site/markdown/index.md
+++ /dev/null
@@ -1,549 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-* [Hadoop OpenStack Support: Swift Object Store](#Hadoop_OpenStack_Support:_Swift_Object_Store)
-    * [Introduction](#Introduction)
-    * [Features](#Features)
-    * [Using the Hadoop Swift Filesystem Client](#Using_the_Hadoop_Swift_Filesystem_Client)
-        * [Concepts: services and containers](#Concepts:_services_and_containers)
-        * [Containers and Objects](#Containers_and_Objects)
-        * [Eventual Consistency](#Eventual_Consistency)
-        * [Non-atomic "directory" operations.](#Non-atomic_directory_operations.)
-    * [Working with Swift Object Stores in Hadoop](#Working_with_Swift_Object_Stores_in_Hadoop)
-        * [Swift Filesystem URIs](#Swift_Filesystem_URIs)
-        * [Installing](#Installing)
-        * [Configuring](#Configuring)
-            * [Example: Rackspace US, in-cluster access using API key](#Example:_Rackspace_US_in-cluster_access_using_API_key)
-            * [Example: Rackspace UK: remote access with password authentication](#Example:_Rackspace_UK:_remote_access_with_password_authentication)
-            * [Example: HP cloud service definition](#Example:_HP_cloud_service_definition)
-        * [General Swift Filesystem configuration options](#General_Swift_Filesystem_configuration_options)
-            * [Blocksize fs.swift.blocksize](#Blocksize_fs.swift.blocksize)
-            * [Partition size fs.swift.partsize](#Partition_size_fs.swift.partsize)
-            * [Request size fs.swift.requestsize](#Request_size_fs.swift.requestsize)
-            * [Connection timeout fs.swift.connect.timeout](#Connection_timeout_fs.swift.connect.timeout)
-            * [Connection timeout fs.swift.socket.timeout](#Connection_timeout_fs.swift.socket.timeout)
-            * [Connection Retry Count fs.swift.connect.retry.count](#Connection_Retry_Count_fs.swift.connect.retry.count)
-            * [Connection Throttle Delay fs.swift.connect.throttle.delay](#Connection_Throttle_Delay_fs.swift.connect.throttle.delay)
-            * [HTTP Proxy](#HTTP_Proxy)
-        * [Troubleshooting](#Troubleshooting)
-            * [ClassNotFoundException](#ClassNotFoundException)
-            * [Failure to Authenticate](#Failure_to_Authenticate)
-            * [Timeout connecting to the Swift Service](#Timeout_connecting_to_the_Swift_Service)
-        * [Warnings](#Warnings)
... 5749 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org