Posted to commits@knox.apache.org by mo...@apache.org on 2018/01/09 19:51:52 UTC

[01/16] knox git commit: KNOX-1145 - Upgrade Jackson due to CVE-2017-7525

Repository: knox
Updated Branches:
  refs/heads/KNOX-998-Package_Restructuring e766b3b77 -> 92e2ec59a


KNOX-1145 - Upgrade Jackson due to CVE-2017-7525
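
For context, CVE-2017-7525 is a remote code execution issue in jackson-databind: when
polymorphic "default typing" is enabled, untrusted JSON can name the class to be
instantiated during deserialization. A minimal sketch of the risky pattern (the class
name and JSON below are hypothetical, and this is not Knox code):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class DefaultTypingRisk {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // Default typing makes Jackson honor a type name embedded in the
            // payload; CVE-2017-7525 abuses this with "gadget" classes that run
            // attacker-controlled logic while being deserialized.
            mapper.enableDefaultTyping();
            String untrusted = "[\"com.example.HypotheticalGadget\", {}]";
            // Fails here unless the named class is on the classpath; with a real
            // gadget present, deserialization itself triggers the exploit.
            Object value = mapper.readValue(untrusted, Object.class);
            System.out.println(value);
        }
    }

Jackson 2.8.10, which the parent pom now pins via the ${jackson.version} property,
ships a blacklist of known gadget types, which is why the hard-coded 2.3.0 and 2.2.2
versions in the diff below are replaced with the property reference.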


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/c65eee25
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/c65eee25
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/c65eee25

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: c65eee251600ac487fb2d5f7f749a0180ccf788b
Parents: 370c861
Author: Colm O hEigeartaigh <co...@apache.org>
Authored: Mon Dec 18 14:43:21 2017 +0000
Committer: Colm O hEigeartaigh <co...@apache.org>
Committed: Mon Dec 18 14:43:21 2017 +0000

----------------------------------------------------------------------
 gateway-server/pom.xml               | 2 +-
 gateway-service-remoteconfig/pom.xml | 5 -----
 pom.xml                              | 3 ++-
 3 files changed, 3 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/c65eee25/gateway-server/pom.xml
----------------------------------------------------------------------
diff --git a/gateway-server/pom.xml b/gateway-server/pom.xml
index 0a43584..b559ba2 100644
--- a/gateway-server/pom.xml
+++ b/gateway-server/pom.xml
@@ -255,7 +255,7 @@
         <dependency>
             <groupId>com.fasterxml.jackson.dataformat</groupId>
             <artifactId>jackson-dataformat-yaml</artifactId>
-            <version>2.3.0</version>
+            <version>${jackson.version}</version>
         </dependency>
 
         <!-- ********** ********** ********** ********** ********** ********** -->

http://git-wip-us.apache.org/repos/asf/knox/blob/c65eee25/gateway-service-remoteconfig/pom.xml
----------------------------------------------------------------------
diff --git a/gateway-service-remoteconfig/pom.xml b/gateway-service-remoteconfig/pom.xml
index 8d06360..cc22b04 100644
--- a/gateway-service-remoteconfig/pom.xml
+++ b/gateway-service-remoteconfig/pom.xml
@@ -78,11 +78,6 @@
             <artifactId>curator-test</artifactId>
             <scope>test</scope>
         </dependency>
-        <dependency>
-            <groupId>org.apache.curator</groupId>
-            <artifactId>curator-test</artifactId>
-            <scope>test</scope>
-        </dependency>
 
     </dependencies>
 

http://git-wip-us.apache.org/repos/asf/knox/blob/c65eee25/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index a55acd6..dee7279 100644
--- a/pom.xml
+++ b/pom.xml
@@ -113,6 +113,7 @@
         <gateway-group>org.apache.knox</gateway-group>
         <groovy-version>2.4.6</groovy-version>
         <hadoop-version>2.7.3</hadoop-version>
+        <jackson.version>2.8.10</jackson.version>
         <jetty-version>9.2.15.v20160210</jetty-version>
         <surefire-version>2.16</surefire-version>
         <failsafe-version>2.19.1</failsafe-version>
@@ -1017,7 +1018,7 @@
             <dependency>
                 <groupId>com.fasterxml.jackson.core</groupId>
                 <artifactId>jackson-databind</artifactId>
-                <version>2.2.2</version>
+                <version>${jackson.version}</version>
             </dependency>
 
             <dependency>


[10/16] knox git commit: KNOX-1161 - Update hadoop dependencies to Hadoop 3 (Colm O hEigeartaigh, reviewed by Sandeep More)

Posted by mo...@apache.org.
KNOX-1161 - Update hadoop dependencies to Hadoop 3 (Colm O hEigeartaigh, reviewed by Sandeep More)


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/99e6a54a
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/99e6a54a
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/99e6a54a

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 99e6a54afb53fdd8b4311f9a5892698d2551c635
Parents: 772bc33
Author: Colm O hEigeartaigh <co...@apache.org>
Authored: Tue Jan 9 16:31:29 2018 +0000
Committer: Colm O hEigeartaigh <co...@apache.org>
Committed: Tue Jan 9 16:31:29 2018 +0000

----------------------------------------------------------------------
 LICENSE                          | 40 ++++++++++++++++++++++++++++++++++-
 gateway-release/src/assembly.xml |  1 +
 gateway-test-release/pom.xml     | 13 +++---------
 pom.xml                          | 16 ++------------
 4 files changed, 45 insertions(+), 25 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/99e6a54a/LICENSE
----------------------------------------------------------------------
diff --git a/LICENSE b/LICENSE
index 3f69cac..218b998 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1331,4 +1331,42 @@ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
 OF SUCH DAMAGE.
 
 Julian Seward, Cambridge, UK.
-jseward@acm.org
\ No newline at end of file
+jseward@acm.org
+
+------------------------------------------------------------------------------
+RE2J License (BSD 3-clause)
+------------------------------------------------------------------------------
+
+This is a work derived from Russ Cox's RE2 in Go, whose license
+http://golang.org/LICENSE is as follows:
+
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   * Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   * Neither the name of Google Inc. nor the names of its contributors
+     may be used to endorse or promote products derived from this
+     software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+

http://git-wip-us.apache.org/repos/asf/knox/blob/99e6a54a/gateway-release/src/assembly.xml
----------------------------------------------------------------------
diff --git a/gateway-release/src/assembly.xml b/gateway-release/src/assembly.xml
index 83a5428..af6237f 100644
--- a/gateway-release/src/assembly.xml
+++ b/gateway-release/src/assembly.xml
@@ -93,6 +93,7 @@
             <excludes>
                 <exclude>${gateway-group}:gateway-*</exclude>
                 <exclude>${gateway-group}:hadoop-examples</exclude>
+                <exclude>org.apache.kerby:*</exclude>
             </excludes>
         </dependencySet>
         <dependencySet>

http://git-wip-us.apache.org/repos/asf/knox/blob/99e6a54a/gateway-test-release/pom.xml
----------------------------------------------------------------------
diff --git a/gateway-test-release/pom.xml b/gateway-test-release/pom.xml
index 48def02..d51bc97 100644
--- a/gateway-test-release/pom.xml
+++ b/gateway-test-release/pom.xml
@@ -36,8 +36,6 @@
 
     <properties>
         <jetty.version>9.3.19.v20170502</jetty.version>
-        <mockito.version>1.8.4</mockito.version>
-        <jackson2.version>2.7.8</jackson2.version>
     </properties>
 
 
@@ -46,13 +44,13 @@
         <dependency>
             <groupId>com.fasterxml.jackson.core</groupId>
             <artifactId>jackson-databind</artifactId>
-            <version>${jackson2.version}</version>
+            <version>${jackson.version}</version>
         </dependency>
 
         <dependency>
             <groupId>org.mockito</groupId>
             <artifactId>mockito-all</artifactId>
-            <version>${mockito.version}</version>
+            <version>${mockito-version}</version>
             <scope>test</scope>
         </dependency>
 
@@ -143,7 +141,7 @@
         <dependency>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-minikdc</artifactId>
-            <version>3.0.0-alpha1</version>
+            <version>${hadoop-version}</version>
             <scope>test</scope>
         </dependency>
         
@@ -224,11 +222,6 @@
         </dependency>
 
         <dependency>
-            <groupId>org.apache.kerby</groupId>
-            <artifactId>kerb-simplekdc</artifactId>
-        </dependency>
-
-        <dependency>
             <groupId>junit</groupId>
             <artifactId>junit</artifactId>
             <scope>test</scope>

http://git-wip-us.apache.org/repos/asf/knox/blob/99e6a54a/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 6bd9396..c6ee580 100644
--- a/pom.xml
+++ b/pom.xml
@@ -115,15 +115,14 @@
         <hadoop-version>3.0.0</hadoop-version>
         <jackson.version>2.8.10</jackson.version>
         <jetty-version>9.2.15.v20160210</jetty-version>
+        <mockito-version>1.10.19</mockito-version>
         <surefire-version>2.16</surefire-version>
         <failsafe-version>2.19.1</failsafe-version>
         <apacheds-version>2.0.0-M16</apacheds-version>
         <javax-websocket-version>1.1</javax-websocket-version>
         <metrics-version>3.1.2</metrics-version>
         <shiro.version>1.2.6</shiro.version>
-        <kerb-simplekdc-version>1.0.0-RC2</kerb-simplekdc-version>
         <commons-beanutils-version>1.9.3</commons-beanutils-version>
-        <jackson2.version>2.2.2</jackson2.version>
     </properties>
 
     <licenses>
@@ -1024,11 +1023,6 @@
                         <artifactId>jersey-servlet</artifactId>
                     </exclusion>
 
-                    <exclusion>
-                        <groupId>org.apache.kerby</groupId>
-                        <artifactId>kerb-simplekdc</artifactId>
-                    </exclusion>
-
                     <!--
                     <exclusion>
                         <groupId>com.fasterxml.jackson.core</groupId>
@@ -1040,12 +1034,6 @@
             </dependency>
 
             <dependency>
-                <groupId>org.apache.kerby</groupId>
-                <artifactId>kerb-simplekdc</artifactId>
-                <version>${kerb-simplekdc-version}</version>
-            </dependency>
-
-            <dependency>
                 <groupId>com.fasterxml.jackson.core</groupId>
                 <artifactId>jackson-databind</artifactId>
                 <version>${jackson.version}</version>
@@ -1354,7 +1342,7 @@
             <dependency>
                 <groupId>org.mockito</groupId>
                 <artifactId>mockito-core</artifactId>
-                <version>1.10.19</version>
+                <version>${mockito-version}</version>
                 <scope>test</scope>
             </dependency>
 

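A quick way to confirm that version convergence like this has taken hold is Maven's
dependency plugin: running "mvn dependency:tree -Dincludes=com.fasterxml.jackson.core"
from the source root shows which jackson-databind version each module now resolves
(shown for orientation; the exact output depends on the checkout).
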

[11/16] knox git commit: Merge branch 'master' into KNOX-998-Package_Restructuring

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/test/java/org/apache/knox/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
----------------------------------------------------------------------
diff --cc gateway-server/src/test/java/org/apache/knox/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
index 75cd5d0,0000000..2e753f1
mode 100644,000000..100644
--- a/gateway-server/src/test/java/org/apache/knox/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
+++ b/gateway-server/src/test/java/org/apache/knox/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
@@@ -1,355 -1,0 +1,368 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.monitor;
 +
 +import org.apache.commons.io.FileUtils;
 +import org.apache.curator.framework.CuratorFramework;
 +import org.apache.curator.framework.CuratorFrameworkFactory;
 +import org.apache.curator.retry.ExponentialBackoffRetry;
 +import org.apache.curator.test.InstanceSpec;
 +import org.apache.curator.test.TestingCluster;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.service.config.remote.zk.ZooKeeperClientService;
 +import org.apache.knox.gateway.service.config.remote.zk.ZooKeeperClientServiceProvider;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClientService;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.test.TestUtils;
 +import org.apache.zookeeper.CreateMode;
 +import org.apache.zookeeper.ZooDefs;
 +import org.apache.zookeeper.data.ACL;
 +import org.easymock.EasyMock;
 +import org.junit.AfterClass;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
 +
 +import java.io.File;
 +import java.util.ArrayList;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +/**
 + * Test the ZooKeeperConfigMonitor WITHOUT SASL configured or znode ACLs applied.
 + * The implementation of the monitor is the same regardless, since the ACLs are defined by the ZooKeeper znode
 + * creator, and the SASL config is purely JAAS (and external to the implementation).
 + */
 +public class ZooKeeperConfigurationMonitorTest {
 +
 +    private static final String PATH_KNOX = "/knox";
 +    private static final String PATH_KNOX_CONFIG = PATH_KNOX + "/config";
 +    private static final String PATH_KNOX_PROVIDERS = PATH_KNOX_CONFIG + "/shared-providers";
 +    private static final String PATH_KNOX_DESCRIPTORS = PATH_KNOX_CONFIG + "/descriptors";
 +
 +    private static File testTmp;
 +    private static File providersDir;
 +    private static File descriptorsDir;
 +
 +    private static TestingCluster zkCluster;
 +
 +    private static CuratorFramework client;
 +
 +    private GatewayConfig gc;
 +
 +
 +    @BeforeClass
 +    public static void setupSuite() throws Exception {
 +        testTmp = TestUtils.createTempDir(ZooKeeperConfigurationMonitorTest.class.getName());
 +        File confDir   = TestUtils.createTempDir(testTmp + "/conf");
 +        providersDir   = TestUtils.createTempDir(confDir + "/shared-providers");
 +        descriptorsDir = TestUtils.createTempDir(confDir + "/descriptors");
 +
 +        configureAndStartZKCluster();
 +    }
 +
 +    private static void configureAndStartZKCluster() throws Exception {
 +        // Configure security for the ZK cluster instances
 +        Map<String, Object> customInstanceSpecProps = new HashMap<>();
 +        customInstanceSpecProps.put("authProvider.1", "org.apache.zookeeper.server.auth.SASLAuthenticationProvider");
 +        customInstanceSpecProps.put("requireClientAuthScheme", "sasl");
 +
 +        // Define the test cluster
 +        List<InstanceSpec> instanceSpecs = new ArrayList<>();
 +        for (int i = 0 ; i < 3 ; i++) {
 +            InstanceSpec is = new InstanceSpec(null, -1, -1, -1, false, (i+1), -1, -1, customInstanceSpecProps);
 +            instanceSpecs.add(is);
 +        }
 +        zkCluster = new TestingCluster(instanceSpecs);
 +
 +        // Start the cluster
 +        zkCluster.start();
 +
 +        // Create the client for the test cluster
 +        client = CuratorFrameworkFactory.builder()
 +                                        .connectString(zkCluster.getConnectString())
 +                                        .retryPolicy(new ExponentialBackoffRetry(100, 3))
 +                                        .build();
 +        assertNotNull(client);
 +        client.start();
 +
 +        // Create the knox config paths with an ACL for the sasl user configured for the client
 +        List<ACL> acls = new ArrayList<>();
 +        acls.add(new ACL(ZooDefs.Perms.ALL, ZooDefs.Ids.ANYONE_ID_UNSAFE));
 +
 +        client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).withACL(acls).forPath(PATH_KNOX_DESCRIPTORS);
 +        assertNotNull("Failed to create node:" + PATH_KNOX_DESCRIPTORS,
-                 client.checkExists().forPath(PATH_KNOX_DESCRIPTORS));
++                      client.checkExists().forPath(PATH_KNOX_DESCRIPTORS));
 +        client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).withACL(acls).forPath(PATH_KNOX_PROVIDERS);
 +        assertNotNull("Failed to create node:" + PATH_KNOX_PROVIDERS,
-                 client.checkExists().forPath(PATH_KNOX_PROVIDERS));
++                      client.checkExists().forPath(PATH_KNOX_PROVIDERS));
 +    }
 +
 +    @AfterClass
 +    public static void tearDownSuite() throws Exception {
 +        // Clean up the ZK nodes, and close the client
 +        if (client != null) {
 +            client.delete().deletingChildrenIfNeeded().forPath(PATH_KNOX);
 +            client.close();
 +        }
 +
 +        // Shutdown the ZK cluster
 +        zkCluster.close();
 +
 +        // Delete the working dir
 +        testTmp.delete();
 +    }
 +
 +    @Test
 +    public void testZooKeeperConfigMonitor() throws Exception {
 +        String configMonitorName = "remoteConfigMonitorClient";
 +
 +        // Setup the base GatewayConfig mock
 +        gc = EasyMock.createNiceMock(GatewayConfig.class);
 +        EasyMock.expect(gc.getGatewayProvidersConfigDir()).andReturn(providersDir.getAbsolutePath()).anyTimes();
 +        EasyMock.expect(gc.getGatewayDescriptorsDir()).andReturn(descriptorsDir.getAbsolutePath()).anyTimes();
 +        EasyMock.expect(gc.getRemoteRegistryConfigurationNames())
 +                .andReturn(Collections.singletonList(configMonitorName))
 +                .anyTimes();
 +        final String registryConfig =
 +                                GatewayConfig.REMOTE_CONFIG_REGISTRY_TYPE + "=" + ZooKeeperClientService.TYPE + ";" +
 +                                GatewayConfig.REMOTE_CONFIG_REGISTRY_ADDRESS + "=" + zkCluster.getConnectString();
 +        EasyMock.expect(gc.getRemoteRegistryConfiguration(configMonitorName))
 +                .andReturn(registryConfig)
 +                .anyTimes();
 +        EasyMock.expect(gc.getRemoteConfigurationMonitorClientName()).andReturn(configMonitorName).anyTimes();
 +        EasyMock.replay(gc);
 +
 +        AliasService aliasService = EasyMock.createNiceMock(AliasService.class);
 +        EasyMock.replay(aliasService);
 +
 +        RemoteConfigurationRegistryClientService clientService = (new ZooKeeperClientServiceProvider()).newInstance();
 +        clientService.setAliasService(aliasService);
 +        clientService.init(gc, Collections.emptyMap());
 +        clientService.start();
 +
 +        DefaultRemoteConfigurationMonitor cm = new DefaultRemoteConfigurationMonitor(gc, clientService);
 +
++        // Create a provider configuration in the test ZK, prior to starting the monitor, to make sure that the monitor
++        // will download existing entries upon starting.
++        final String preExistingProviderConfig = getProviderPath("pre-existing-providers.xml");
++        client.create().withMode(CreateMode.PERSISTENT).forPath(preExistingProviderConfig,
++                                                                TEST_PROVIDERS_CONFIG_1.getBytes());
++        File preExistingProviderConfigLocalFile = new File(providersDir, "pre-existing-providers.xml");
++        assertFalse("This file should not exist locally prior to monitor starting.",
++                    preExistingProviderConfigLocalFile.exists());
++
 +        try {
 +            cm.start();
 +        } catch (Exception e) {
 +            fail("Failed to start monitor: " + e.getMessage());
 +        }
 +
++        assertTrue("This file should exist locally immediately after monitor starting.",
++                    preExistingProviderConfigLocalFile.exists());
++
++
 +        try {
 +            final String pc_one_znode = getProviderPath("providers-config1.xml");
 +            final File pc_one         = new File(providersDir, "providers-config1.xml");
 +            final String pc_two_znode = getProviderPath("providers-config2.xml");
 +            final File pc_two         = new File(providersDir, "providers-config2.xml");
 +
 +            client.create().withMode(CreateMode.PERSISTENT).forPath(pc_one_znode, TEST_PROVIDERS_CONFIG_1.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(pc_one.exists());
 +            assertEquals(TEST_PROVIDERS_CONFIG_1, FileUtils.readFileToString(pc_one));
 +
 +            client.create().withMode(CreateMode.PERSISTENT).forPath(getProviderPath("providers-config2.xml"), TEST_PROVIDERS_CONFIG_2.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(pc_two.exists());
 +            assertEquals(TEST_PROVIDERS_CONFIG_2, FileUtils.readFileToString(pc_two));
 +
 +            client.setData().forPath(pc_two_znode, TEST_PROVIDERS_CONFIG_1.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(pc_two.exists());
 +            assertEquals(TEST_PROVIDERS_CONFIG_1, FileUtils.readFileToString(pc_two));
 +
 +            client.delete().forPath(pc_two_znode);
 +            Thread.sleep(100);
 +            assertFalse(pc_two.exists());
 +
 +            client.delete().forPath(pc_one_znode);
 +            Thread.sleep(100);
 +            assertFalse(pc_one.exists());
 +
 +            final String desc_one_znode   = getDescriptorPath("test1.json");
 +            final String desc_two_znode   = getDescriptorPath("test2.json");
 +            final String desc_three_znode = getDescriptorPath("test3.json");
 +            final File desc_one           = new File(descriptorsDir, "test1.json");
 +            final File desc_two           = new File(descriptorsDir, "test2.json");
 +            final File desc_three         = new File(descriptorsDir, "test3.json");
 +
 +            client.create().withMode(CreateMode.PERSISTENT).forPath(desc_one_znode, TEST_DESCRIPTOR_1.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(desc_one.exists());
 +            assertEquals(TEST_DESCRIPTOR_1, FileUtils.readFileToString(desc_one));
 +
 +            client.create().withMode(CreateMode.PERSISTENT).forPath(desc_two_znode, TEST_DESCRIPTOR_1.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(desc_two.exists());
 +            assertEquals(TEST_DESCRIPTOR_1, FileUtils.readFileToString(desc_two));
 +
 +            client.setData().forPath(desc_two_znode, TEST_DESCRIPTOR_2.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(desc_two.exists());
 +            assertEquals(TEST_DESCRIPTOR_2, FileUtils.readFileToString(desc_two));
 +
 +            client.create().withMode(CreateMode.PERSISTENT).forPath(desc_three_znode, TEST_DESCRIPTOR_1.getBytes());
 +            Thread.sleep(100);
 +            assertTrue(desc_three.exists());
 +            assertEquals(TEST_DESCRIPTOR_1, FileUtils.readFileToString(desc_three));
 +
 +            client.delete().forPath(desc_two_znode);
 +            Thread.sleep(100);
 +            assertFalse("Expected test2.json to have been deleted.", desc_two.exists());
 +
 +            client.delete().forPath(desc_three_znode);
 +            Thread.sleep(100);
 +            assertFalse(desc_three.exists());
 +
 +            client.delete().forPath(desc_one_znode);
 +            Thread.sleep(100);
 +            assertFalse(desc_one.exists());
 +        } finally {
 +            cm.stop();
 +        }
 +    }
 +
 +    private static String getDescriptorPath(String descriptorName) {
 +        return PATH_KNOX_DESCRIPTORS + "/" + descriptorName;
 +    }
 +
 +    private static String getProviderPath(String providerConfigName) {
 +        return PATH_KNOX_PROVIDERS + "/" + providerConfigName;
 +    }
 +
 +
 +    private static final String TEST_PROVIDERS_CONFIG_1 =
 +            "<gateway>\n" +
 +            "    <provider>\n" +
 +            "        <role>identity-assertion</role>\n" +
 +            "        <name>Default</name>\n" +
 +            "        <enabled>true</enabled>\n" +
 +            "    </provider>\n" +
 +            "    <provider>\n" +
 +            "        <role>hostmap</role>\n" +
 +            "        <name>static</name>\n" +
 +            "        <enabled>true</enabled>\n" +
 +            "        <param><name>localhost</name><value>sandbox,sandbox.hortonworks.com</value></param>\n" +
 +            "    </provider>\n" +
 +            "</gateway>\n";
 +
 +    private static final String TEST_PROVIDERS_CONFIG_2 =
 +            "<gateway>\n" +
 +            "    <provider>\n" +
 +            "        <role>authentication</role>\n" +
 +            "        <name>ShiroProvider</name>\n" +
 +            "        <enabled>true</enabled>\n" +
 +            "        <param>\n" +
 +            "            <name>sessionTimeout</name>\n" +
 +            "            <value>30</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapRealm</name>\n" +
 +            "            <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapContextFactory</name>\n" +
 +            "            <value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapRealm.contextFactory</name>\n" +
 +            "            <value>$ldapContextFactory</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapRealm.userDnTemplate</name>\n" +
 +            "            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapRealm.contextFactory.url</name>\n" +
 +            "            <value>ldap://localhost:33389</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n" +
 +            "            <value>simple</value>\n" +
 +            "        </param>\n" +
 +            "        <param>\n" +
 +            "            <name>urls./**</name>\n" +
 +            "            <value>authcBasic</value>\n" +
 +            "        </param>\n" +
 +            "    </provider>\n" +
 +            "</gateway>\n";
 +
 +    private static final String TEST_DESCRIPTOR_1 =
 +            "{\n" +
 +            "  \"discovery-type\":\"AMBARI\",\n" +
 +            "  \"discovery-address\":\"http://sandbox.hortonworks.com:8080\",\n" +
 +            "  \"discovery-user\":\"maria_dev\",\n" +
 +            "  \"discovery-pwd-alias\":\"sandbox.ambari.discovery.password\",\n" +
 +            "  \"provider-config-ref\":\"sandbox-providers.xml\",\n" +
 +            "  \"cluster\":\"Sandbox\",\n" +
 +            "  \"services\":[\n" +
 +            "    {\"name\":\"NODEUI\"},\n" +
 +            "    {\"name\":\"YARNUI\"},\n" +
 +            "    {\"name\":\"HDFSUI\"},\n" +
 +            "    {\"name\":\"OOZIEUI\"},\n" +
 +            "    {\"name\":\"HBASEUI\"},\n" +
 +            "    {\"name\":\"NAMENODE\"},\n" +
 +            "    {\"name\":\"JOBTRACKER\"},\n" +
 +            "    {\"name\":\"WEBHDFS\"},\n" +
 +            "    {\"name\":\"WEBHCAT\"},\n" +
 +            "    {\"name\":\"OOZIE\"},\n" +
 +            "    {\"name\":\"WEBHBASE\"},\n" +
 +            "    {\"name\":\"RESOURCEMANAGER\"},\n" +
 +            "    {\"name\":\"AMBARI\", \"urls\":[\"http://c6401.ambari.apache.org:8080\"]},\n" +
 +            "    {\"name\":\"AMBARIUI\", \"urls\":[\"http://c6401.ambari.apache.org:8080\"]}\n" +
 +            "  ]\n" +
 +            "}\n";
 +
 +    private static final String TEST_DESCRIPTOR_2 =
 +            "{\n" +
 +            "  \"discovery-type\":\"AMBARI\",\n" +
 +            "  \"discovery-address\":\"http://sandbox.hortonworks.com:8080\",\n" +
 +            "  \"discovery-user\":\"maria_dev\",\n" +
 +            "  \"discovery-pwd-alias\":\"sandbox.ambari.discovery.password\",\n" +
 +            "  \"provider-config-ref\":\"sandbox-providers.xml\",\n" +
 +            "  \"cluster\":\"Sandbox\",\n" +
 +            "  \"services\":[\n" +
 +            "    {\"name\":\"NAMENODE\"},\n" +
 +            "    {\"name\":\"JOBTRACKER\"},\n" +
 +            "    {\"name\":\"WEBHDFS\"},\n" +
 +            "    {\"name\":\"WEBHCAT\"},\n" +
 +            "    {\"name\":\"OOZIE\"},\n" +
 +            "    {\"name\":\"WEBHBASE\"},\n" +
 +            "    {\"name\":\"RESOURCEMANAGER\"}\n" +
 +            "  ]\n" +
 +            "}\n";
 +
 +}

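For orientation: the DefaultRemoteConfigurationMonitor exercised above is not part of
this diff, but the behavior the test asserts (mirroring znode children into the local
shared-providers and descriptors directories, including entries that already exist
when the monitor starts) is the usual shape of a Curator PathChildrenCache listener.
A rough sketch under those assumptions, not the actual Knox implementation:

    import java.io.File;
    import java.nio.file.Files;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.recipes.cache.PathChildrenCache;

    public class ZNodeMirror {
        // Mirrors the children of znodePath into localDir.
        public static PathChildrenCache mirror(CuratorFramework client, String znodePath,
                                               File localDir) throws Exception {
            PathChildrenCache cache = new PathChildrenCache(client, znodePath, true);
            cache.getListenable().addListener((c, event) -> {
                if (event.getData() == null) {
                    return;
                }
                String name = new File(event.getData().getPath()).getName();
                File local = new File(localDir, name);
                switch (event.getType()) {
                    case CHILD_ADDED:
                    case CHILD_UPDATED:
                        // Download the znode data on create/update.
                        Files.write(local.toPath(), event.getData().getData());
                        break;
                    case CHILD_REMOVED:
                        // Propagate deletion, as the test asserts for pc_two etc.
                        local.delete();
                        break;
                    default:
                        break;
                }
            });
            // The default (NORMAL) start mode fires CHILD_ADDED for znodes that
            // already exist, so pre-existing entries are downloaded shortly after
            // start, matching the pre-existing-providers.xml assertion above.
            cache.start();
            return cache;
        }
    }
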
http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/test/java/org/apache/knox/gateway/util/KnoxCLITest.java
----------------------------------------------------------------------
diff --cc gateway-server/src/test/java/org/apache/knox/gateway/util/KnoxCLITest.java
index b768937,0000000..116b8dd
mode 100644,000000..100644
--- a/gateway-server/src/test/java/org/apache/knox/gateway/util/KnoxCLITest.java
+++ b/gateway-server/src/test/java/org/apache/knox/gateway/util/KnoxCLITest.java
@@@ -1,1032 -1,0 +1,1048 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.knox.gateway.util;
 +
 +import com.mycila.xmltool.XMLDoc;
 +import com.mycila.xmltool.XMLTag;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.knox.gateway.config.impl.GatewayConfigImpl;
 +import org.apache.knox.gateway.services.GatewayServices;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClient;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClientService;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.gateway.services.security.MasterService;
 +import org.apache.knox.test.TestUtils;
 +import org.junit.Before;
 +import org.junit.Test;
 +
 +import java.io.ByteArrayOutputStream;
 +import java.io.File;
 +import java.io.FileOutputStream;
 +import java.io.IOException;
 +import java.io.PrintStream;
 +import java.net.URL;
 +import java.util.UUID;
 +
 +import static org.hamcrest.CoreMatchers.containsString;
 +import static org.hamcrest.CoreMatchers.is;
 +import static org.hamcrest.CoreMatchers.not;
 +import static org.hamcrest.CoreMatchers.notNullValue;
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertNull;
 +import static org.junit.Assert.assertThat;
 +import static org.junit.Assert.assertTrue;
 +
 +/**
 + * @author larry
 + *
 + */
 +public class KnoxCLITest {
 +  private final ByteArrayOutputStream outContent = new ByteArrayOutputStream();
 +  private final ByteArrayOutputStream errContent = new ByteArrayOutputStream();
 +
 +  @Before
 +  public void setup() throws Exception {
 +    System.setOut(new PrintStream(outContent));
 +    System.setErr(new PrintStream(errContent));
 +  }
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryClientService() throws Exception {
 +    outContent.reset();
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    Configuration config = new GatewayConfigImpl();
 +    // Configure a client for the test local filesystem registry implementation
 +    config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=/test");
 +    cli.setConf(config);
 +
 +    // This is only to get the gateway services initialized
 +    cli.run(new String[]{"version"});
 +
 +    RemoteConfigurationRegistryClientService service =
 +                                   cli.getGatewayServices().getService(GatewayServices.REMOTE_REGISTRY_CLIENT_SERVICE);
 +    assertNotNull(service);
 +    RemoteConfigurationRegistryClient client = service.get("test_client");
 +    assertNotNull(client);
 +
 +    assertNull(service.get("bogus"));
 +  }
 +
 +  @Test
 +  public void testListRemoteConfigurationRegistryClients() throws Exception {
 +    outContent.reset();
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    String[] args = { "list-registry-clients", "--master","master" };
 +
 +    Configuration config = new GatewayConfigImpl();
 +    cli.setConf(config);
 +
 +    // Test with no registry clients configured
 +    int rc = cli.run(args);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().isEmpty());
 +
 +    // Test with a single client configured
 +    // Configure a client for the test local filesystem registry implementation
 +    config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=/test1");
 +    cli.setConf(config);
 +    outContent.reset();
 +    rc = cli.run(args);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("test_client"));
 +
 +    // Configure another client for the test local filesystem registry implementation
 +    config.set("gateway.remote.config.registry.another_client", "type=LocalFileSystem;address=/test2");
 +    cli.setConf(config);
 +    outContent.reset();
 +    rc = cli.run(args);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("test_client"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("another_client"));
 +  }
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryGetACLs() throws Exception {
 +    outContent.reset();
 +
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +
 +      final String providerConfigName = "my-provider-config.xml";
 +      final String providerConfigContent = "<gateway/>\n";
 +      final File testProviderConfig = new File(testRoot, providerConfigName);
 +      final String[] uploadArgs = {"upload-provider-config", testProviderConfig.getAbsolutePath(),
 +                                   "--registry-client", "test_client",
 +                                   "--master", "master"};
 +      FileUtils.writeStringToFile(testProviderConfig, providerConfigContent);
 +
 +
 +      final String[] args = {"get-registry-acl", "/knox/config/shared-providers",
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      int rc = cli.run(uploadArgs);
 +      assertEquals(0, rc);
 +
 +      // Run the test command
 +      rc = cli.run(args);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
 +      String result = outContent.toString();
 +      assertEquals(result, 3, result.split("\n").length);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryUploadProviderConfig() throws Exception {
 +    outContent.reset();
 +
 +    final String providerConfigName = "my-provider-config.xml";
 +    final String providerConfigContent = "<gateway/>\n";
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testProviderConfig = new File(testRoot, providerConfigName);
 +
 +      final String[] args = {"upload-provider-config", testProviderConfig.getAbsolutePath(),
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testProviderConfig, providerConfigContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(args);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
++
++      outContent.reset();
++      final String[] listArgs = {"list-provider-configs", "--registry-client", "test_client"};
++      cli.run(listArgs);
++      String outStr =  outContent.toString().trim();
++      assertTrue(outStr.startsWith("Provider Configurations"));
++      assertTrue(outStr.endsWith(")\n"+providerConfigName));
++
 +      File registryFile = new File(testRegistry, "knox/config/shared-providers/" + providerConfigName);
 +      assertTrue(registryFile.exists());
 +      assertEquals(FileUtils.readFileToString(registryFile), providerConfigContent);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryUploadProviderConfigWithDestinationOverride() throws Exception {
 +    outContent.reset();
 +
 +    final String providerConfigName = "my-provider-config.xml";
 +    final String entryName = "my-providers.xml";
 +    final String providerConfigContent = "<gateway/>\n";
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testProviderConfig = new File(testRoot, providerConfigName);
 +
 +      final String[] args = {"upload-provider-config", testProviderConfig.getAbsolutePath(),
 +                             "--entry-name", entryName,
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testProviderConfig, providerConfigContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(args);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
 +      assertFalse((new File(testRegistry, "knox/config/shared-providers/" + providerConfigName)).exists());
 +      File registryFile = new File(testRegistry, "knox/config/shared-providers/" + entryName);
 +      assertTrue(registryFile.exists());
 +      assertEquals(FileUtils.readFileToString(registryFile), providerConfigContent);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryUploadDescriptor() throws Exception {
 +    outContent.reset();
 +
 +    final String descriptorName = "my-topology.json";
 +    final String descriptorContent = testDescriptorContentJSON;
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testDescriptor = new File(testRoot, descriptorName);
 +
 +      final String[] args = {"upload-descriptor", testDescriptor.getAbsolutePath(),
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testDescriptor, descriptorContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(args);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
++
++      outContent.reset();
++      final String[] listArgs = {"list-descriptors", "--registry-client", "test_client"};
++      cli.run(listArgs);
++      String outStr =  outContent.toString().trim();
++      assertTrue(outStr.startsWith("Descriptors"));
++      assertTrue(outStr.endsWith(")\n"+descriptorName));
++
 +      File registryFile = new File(testRegistry, "knox/config/descriptors/" + descriptorName);
 +      assertTrue(registryFile.exists());
 +      assertEquals(FileUtils.readFileToString(registryFile), descriptorContent);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryUploadDescriptorWithDestinationOverride() throws Exception {
 +    outContent.reset();
 +
 +    final String descriptorName = "my-topology.json";
 +    final String entryName = "different-topology.json";
 +    final String descriptorContent = testDescriptorContentJSON;
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testDescriptor = new File(testRoot, descriptorName);
 +
 +      final String[] args = {"upload-descriptor", testDescriptor.getAbsolutePath(),
 +                             "--entry-name", entryName,
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testDescriptor, descriptorContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(args);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
 +      assertFalse((new File(testRegistry, "knox/config/descriptors/" + descriptorName)).exists());
 +      File registryFile = new File(testRegistry, "knox/config/descriptors/" + entryName);
 +      assertTrue(registryFile.exists());
 +      assertEquals(FileUtils.readFileToString(registryFile), descriptorContent);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryDeleteProviderConfig() throws Exception {
 +    outContent.reset();
 +
 +    // Create a provider config
 +    final String providerConfigName = "my-provider-config.xml";
 +    final String providerConfigContent = "<gateway/>\n";
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testProviderConfig = new File(testRoot, providerConfigName);
 +
 +      final String[] createArgs = {"upload-provider-config", testProviderConfig.getAbsolutePath(),
 +                                   "--registry-client", "test_client",
 +                                   "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testProviderConfig, providerConfigContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(createArgs);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
 +      File registryFile = new File(testRegistry, "knox/config/shared-providers/" + providerConfigName);
 +      assertTrue(registryFile.exists());
 +
 +      outContent.reset();
 +
 +      // Delete the created provider config
 +      final String[] deleteArgs = {"delete-provider-config", providerConfigName,
 +                                   "--registry-client", "test_client",
 +                                   "--master", "master"};
 +      rc = cli.run(deleteArgs);
 +      assertEquals(0, rc);
 +      assertFalse(registryFile.exists());
 +
 +      // Try to delete a provider config that does not exist
 +      rc = cli.run(new String[]{"delete-provider-config", "imaginary-providers.xml",
 +                                "--registry-client", "test_client",
 +                                "--master", "master"});
 +      assertEquals(0, rc);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +  @Test
 +  public void testRemoteConfigurationRegistryDeleteDescriptor() throws Exception {
 +    outContent.reset();
 +
 +    final String descriptorName = "my-topology.json";
 +    final String descriptorContent = testDescriptorContentJSON;
 +
 +    final File testRoot = TestUtils.createTempDir(this.getClass().getName());
 +    try {
 +      final File testRegistry = new File(testRoot, "registryRoot");
 +      final File testDescriptor = new File(testRoot, descriptorName);
 +
 +      final String[] createArgs = {"upload-descriptor", testDescriptor.getAbsolutePath(),
 +                             "--registry-client", "test_client",
 +                             "--master", "master"};
 +
 +      FileUtils.writeStringToFile(testDescriptor, descriptorContent);
 +
 +      KnoxCLI cli = new KnoxCLI();
 +      Configuration config = new GatewayConfigImpl();
 +      // Configure a client for the test local filesystem registry implementation
 +      config.set("gateway.remote.config.registry.test_client", "type=LocalFileSystem;address=" + testRegistry);
 +      cli.setConf(config);
 +
 +      // Run the test command
 +      int rc = cli.run(createArgs);
 +
 +      // Validate the result
 +      assertEquals(0, rc);
 +      File registryFile = new File(testRegistry, "knox/config/descriptors/" + descriptorName);
 +      assertTrue(registryFile.exists());
 +
 +      outContent.reset();
 +
 +      // Delete the created provider config
 +      final String[] deleteArgs = {"delete-descriptor", descriptorName,
 +                                   "--registry-client", "test_client",
 +                                   "--master", "master"};
 +      rc = cli.run(deleteArgs);
 +      assertEquals(0, rc);
 +      assertFalse(registryFile.exists());
 +
 +      // Try to delete a descriptor that does not exist
 +      rc = cli.run(new String[]{"delete-descriptor", "bogus.json",
 +                                "--registry-client", "test_client",
 +                                "--master", "master"});
 +      assertEquals(0, rc);
 +    } finally {
 +      FileUtils.forceDelete(testRoot);
 +    }
 +  }
 +
 +  @Test
 +  public void testSuccessfulAliasLifecycle() throws Exception {
 +    outContent.reset();
 +    String[] args1 = {"create-alias", "alias1", "--value", "testvalue1", "--master", "master"};
 +    int rc = 0;
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    rc = cli.run(args1);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1 has been successfully " +
 +        "created."));
 +
 +    outContent.reset();
 +    String[] args2 = {"list-alias", "--master", 
 +        "master"};
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1"));
 +
 +    outContent.reset();
 +    String[] args4 = {"delete-alias", "alias1", "--master", 
 +      "master"};
 +    rc = cli.run(args4);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1 has been successfully " +
 +        "deleted."));
 +
 +    outContent.reset();
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    assertFalse(outContent.toString(), outContent.toString().contains("alias1"));
 +  }
 +  
 +  @Test
 +  public void testListAndDeleteOfAliasForInvalidClusterName() throws Exception {
 +    outContent.reset();
 +    String[] args1 =
 +        { "create-alias", "alias1", "--cluster", "cluster1", "--value", "testvalue1", "--master",
 +            "master" };
 +    int rc = 0;
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    rc = cli.run(args1);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains(
 +      "alias1 has been successfully " + "created."));
 +
 +    outContent.reset();
 +    String[] args2 = { "list-alias", "--cluster", "Invalidcluster1", "--master", "master" };
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    System.out.println(outContent.toString());
 +    assertTrue(outContent.toString(),
 +      outContent.toString().contains("Invalid cluster name provided: Invalidcluster1"));
 +
 +    outContent.reset();
 +    String[] args4 =
 +        { "delete-alias", "alias1", "--cluster", "Invalidcluster1", "--master", "master" };
 +    rc = cli.run(args4);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(),
 +      outContent.toString().contains("Invalid cluster name provided: Invalidcluster1"));
 +
 +  }
 +
 +  @Test
 +  public void testDeleteOfNonExistAliasFromUserDefinedCluster() throws Exception {
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    try {
 +      int rc = 0;
 +      outContent.reset();
 +      String[] args1 =
 +          { "create-alias", "alias1", "--cluster", "cluster1", "--value", "testvalue1", "--master",
 +              "master" };
 +      cli.run(args1);
 +
 +      // Delete invalid alias from the cluster
 +      outContent.reset();
 +      String[] args2 = { "delete-alias", "alias2", "--cluster", "cluster1", "--master", "master" };
 +      rc = cli.run(args2);
 +      assertEquals(0, rc);
 +      assertTrue(outContent.toString().contains("No such alias exists in the cluster."));
 +    } finally {
 +      outContent.reset();
 +      String[] args1 = { "delete-alias", "alias1", "--cluster", "cluster1", "--master", "master" };
 +      cli.run(args1);
 +    }
 +  }
 +
 +  @Test
 +  public void testDeleteOfNonExistAliasFromDefaultCluster() throws Exception {
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    try {
 +      int rc = 0;
 +      outContent.reset();
 +      String[] args1 = { "create-alias", "alias1", "--value", "testvalue1", "--master", "master" };
 +      cli.run(args1);
 +
 +      // Delete invalid alias from the cluster
 +      outContent.reset();
 +      String[] args2 = { "delete-alias", "alias2", "--master", "master" };
 +      rc = cli.run(args2);
 +      assertEquals(0, rc);
 +      assertTrue(outContent.toString().contains("No such alias exists in the cluster."));
 +    } finally {
 +      outContent.reset();
 +      String[] args1 = { "delete-alias", "alias1", "--master", "master" };
 +      cli.run(args1);
 +    }
 +  }
 +
 +  @Test
 +  public void testForInvalidArgument() throws Exception {
 +    outContent.reset();
 +    String[] args1 = { "--value", "testvalue1", "--master", "master" };
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    int rc = cli.run(args1);
 +    assertEquals(-2, rc);
 +    assertTrue(outContent.toString().contains("ERROR: Invalid Command"));
 +  }
 +
 +  @Test
 +  public void testListAndDeleteOfAliasForValidClusterName() throws Exception {
 +    outContent.reset();
 +    String[] args1 =
 +        { "create-alias", "alias1", "--cluster", "cluster1", "--value", "testvalue1", "--master",
 +            "master" };
 +    int rc = 0;
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(new GatewayConfigImpl());
 +    rc = cli.run(args1);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains(
 +      "alias1 has been successfully " + "created."));
 +
 +    outContent.reset();
 +    String[] args2 = { "list-alias", "--cluster", "cluster1", "--master", "master" };
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    System.out.println(outContent.toString());
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1"));
 +
 +    outContent.reset();
 +    String[] args4 =
 +        { "delete-alias", "alias1", "--cluster", "cluster1", "--master", "master" };
 +    rc = cli.run(args4);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains(
 +      "alias1 has been successfully " + "deleted."));
 +
 +    outContent.reset();
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    assertFalse(outContent.toString(), outContent.toString().contains("alias1"));
 +
 +  }
 +
 +  @Test
 +  public void testGatewayAndClusterStores() throws Exception {
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    FileUtils.deleteQuietly( new File( config.getGatewaySecurityDir() ) );
 +
 +    outContent.reset();
 +    String[] gwCreateArgs = {"create-alias", "alias1", "--value", "testvalue1", "--master", "master"};
 +    int rc = 0;
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    rc = cli.run(gwCreateArgs);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1 has been successfully " +
 +        "created."));
 +
 +    AliasService as = cli.getGatewayServices().getService(GatewayServices.ALIAS_SERVICE);
 +
 +    outContent.reset();
 +    String[] clusterCreateArgs = {"create-alias", "alias2", "--value", "testvalue1", "--cluster", "test", 
 +        "--master", "master"};
 +    cli = new KnoxCLI();
 +    cli.setConf( config );
 +    rc = cli.run(clusterCreateArgs);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias2 has been successfully " +
 +        "created."));
 +
 +    outContent.reset();
 +    String[] args2 = {"list-alias", "--master", "master"};
 +    cli = new KnoxCLI();
 +    rc = cli.run(args2);
 +    assertEquals(0, rc);
 +    assertFalse(outContent.toString(), outContent.toString().contains("alias2"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1"));
 +
 +    char[] passwordChars = as.getPasswordFromAliasForCluster("test", "alias2");
 +    assertNotNull(passwordChars);
 +    assertTrue(new String(passwordChars), "testvalue1".equals(new String(passwordChars)));
 +
 +    outContent.reset();
 +    String[] args1 = {"list-alias", "--cluster", "test", "--master", "master"};
 +    cli = new KnoxCLI();
 +    rc = cli.run(args1);
 +    assertEquals(0, rc);
 +    assertFalse(outContent.toString(), outContent.toString().contains("alias1"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias2"));
 +
 +    outContent.reset();
 +    String[] args4 = {"delete-alias", "alias1", "--master", "master"};
 +    cli = new KnoxCLI();
 +    rc = cli.run(args4);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias1 has been successfully " +
 +        "deleted."));
 +    
 +    outContent.reset();
 +    String[] args5 = {"delete-alias", "alias2", "--cluster", "test", "--master", "master"};
 +    cli = new KnoxCLI();
 +    rc = cli.run(args5);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("alias2 has been successfully " +
 +        "deleted."));
 +  }
 +
 +  private void createTestMaster() throws Exception {
 +    outContent.reset();
 +    String[] args = new String[]{ "create-master", "--master", "master", "--force" };
 +    KnoxCLI cli = new KnoxCLI();
 +    int rc = cli.run(args);
 +    assertThat( rc, is( 0 ) );
 +    MasterService ms = cli.getGatewayServices().getService("MasterService");
 +    String master = String.copyValueOf( ms.getMasterSecret() );
 +    assertThat( master, is( "master" ) );
 +    assertThat( outContent.toString(), containsString( "Master secret has been persisted to disk." ) );
 +  }
 +
 +  @Test
 +  public void testCreateSelfSignedCert() throws Exception {
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    FileUtils.deleteQuietly( new File( config.getGatewaySecurityDir() ) );
 +    createTestMaster();
 +    outContent.reset();
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    String[] gwCreateArgs = {"create-cert", "--hostname", "hostname1", "--master", "master"};
 +    int rc = 0;
 +    rc = cli.run(gwCreateArgs);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("gateway-identity has been successfully " +
 +        "created."));
 +  }
 +
 +  @Test
 +  public void testExportCert() throws Exception {
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    FileUtils.deleteQuietly( new File( config.getGatewaySecurityDir() ) );
 +    createTestMaster();
 +    outContent.reset();
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    String[] gwCreateArgs = {"create-cert", "--hostname", "hostname1", "--master", "master"};
 +    int rc = 0;
 +    rc = cli.run(gwCreateArgs);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("gateway-identity has been successfully " +
 +        "created."));
 +
 +    outContent.reset();
 +    String[] gwCreateArgs2 = {"export-cert", "--type", "PEM"};
 +    rc = 0;
 +    rc = cli.run(gwCreateArgs2);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("Certificate gateway-identity has been successfully exported to"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("gateway-identity.pem"));
 +
 +    outContent.reset();
 +    String[] gwCreateArgs2_5 = {"export-cert"};
 +    rc = 0;
 +    rc = cli.run(gwCreateArgs2_5);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("Certificate gateway-identity has been successfully exported to"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("gateway-identity.pem"));
 +
 +    outContent.reset();
 +    String[] gwCreateArgs3 = {"export-cert", "--type", "JKS"};
 +    rc = 0;
 +    rc = cli.run(gwCreateArgs3);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("Certificate gateway-identity has been successfully exported to"));
 +    assertTrue(outContent.toString(), outContent.toString().contains("gateway-client-trust.jks"));
 +
 +    outContent.reset();
 +    String[] gwCreateArgs4 = {"export-cert", "--type", "invalid"};
 +    rc = 0;
 +    rc = cli.run(gwCreateArgs4);
 +    assertEquals(0, rc);
 +    assertTrue(outContent.toString(), outContent.toString().contains("Invalid type for export file provided."));
 +  }
 +
 +  @Test
 +  public void testCreateMaster() throws Exception {
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    FileUtils.deleteQuietly( new File( config.getGatewaySecurityDir() ) );
 +    outContent.reset();
 +    String[] args = {"create-master", "--master", "master"};
 +    int rc = 0;
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    rc = cli.run(args);
 +    assertEquals(0, rc);
 +    MasterService ms = cli.getGatewayServices().getService("MasterService");
 +    assertTrue( new String( ms.getMasterSecret() ), "master".equals( new String( ms.getMasterSecret() ) ) );
 +    assertTrue(outContent.toString(), outContent.toString().contains("Master secret has been persisted to disk."));
 +  }
 +
 +  @Test
 +  public void testCreateMasterGenerate() throws Exception {
 +    String[] args = {"create-master", "--generate" };
 +    int rc = 0;
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    File masterFile = new File( config.getGatewaySecurityDir(), "master" );
 +
 +    // Need to delete the master file so that the change isn't ignored.
 +    if( masterFile.exists() ) {
 +      assertThat( "Failed to delete existing master file.", masterFile.delete(), is( true ) );
 +    }
 +    outContent.reset();
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(config);
 +    rc = cli.run(args);
 +    assertThat( rc, is( 0 ) );
 +    MasterService ms = cli.getGatewayServices().getService("MasterService");
 +    String master = String.copyValueOf( ms.getMasterSecret() );
 +    assertThat( master.length(), is( 36 ) );
 +    assertThat( master.indexOf( '-' ), is( 8 ) );
 +    assertThat( master.indexOf( '-', 9 ), is( 13 ) );
 +    assertThat( master.indexOf( '-', 14 ), is( 18 ) );
 +    assertThat( master.indexOf( '-', 19 ), is( 23 ) );
 +    assertThat( UUID.fromString( master ), notNullValue() );
 +    assertThat( outContent.toString(), containsString( "Master secret has been persisted to disk." ) );
 +
 +    // Need to delete the master file so that the change isn't ignored.
 +    if( masterFile.exists() ) {
 +      assertThat( "Failed to delete existing master file.", masterFile.delete(), is( true ) );
 +    }
 +    outContent.reset();
 +    cli = new KnoxCLI();
 +    rc = cli.run(args);
 +    ms = cli.getGatewayServices().getService("MasterService");
 +    String master2 = String.copyValueOf( ms.getMasterSecret() );
 +    assertThat( master2.length(), is( 36 ) );
 +    assertThat( UUID.fromString( master2 ), notNullValue() );
 +    assertThat( master2, not( is( master ) ) );
 +    assertThat( rc, is( 0 ) );
 +    assertThat(outContent.toString(), containsString("Master secret has been persisted to disk."));
 +  }
 +
 +  @Test
 +  public void testCreateMasterForce() throws Exception {
 +    GatewayConfigImpl config = new GatewayConfigImpl();
 +    File masterFile = new File( config.getGatewaySecurityDir(), "master" );
 +
 +    // Need to delete the master file so that the change isn't ignored.
 +    if( masterFile.exists() ) {
 +      assertThat( "Failed to delete existing master file.", masterFile.delete(), is( true ) );
 +    }
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf(config);
 +    MasterService ms;
 +    int rc = 0;
 +    outContent.reset();
 +
 +    String[] args = { "create-master", "--master", "test-master-1" };
 +
 +    rc = cli.run(args);
 +    assertThat( rc, is( 0 ) );
 +    ms = cli.getGatewayServices().getService("MasterService");
 +    String master = String.copyValueOf( ms.getMasterSecret() );
 +    assertThat( master, is( "test-master-1" ) );
 +    assertThat( outContent.toString(), containsString( "Master secret has been persisted to disk." ) );
 +
 +    outContent.reset();
 +    rc = cli.run(args);
 +    assertThat( rc, is( 0 ) );
 +    assertThat( outContent.toString(), containsString( "Master secret is already present on disk." ) );
 +
 +    outContent.reset();
 +    args = new String[]{ "create-master", "--master", "test-master-2", "--force" };
 +    rc = cli.run(args);
 +    assertThat( rc, is( 0 ) );
 +    ms = cli.getGatewayServices().getService("MasterService");
 +    master = String.copyValueOf( ms.getMasterSecret() );
 +    assertThat( master, is( "test-master-2" ) );
 +    assertThat( outContent.toString(), containsString( "Master secret has been persisted to disk." ) );
 +  }
 +
 +  @Test
 +  public void testListTopology() throws Exception {
 +
 +    GatewayConfigMock config = new GatewayConfigMock();
 +    URL topoURL = ClassLoader.getSystemResource("conf-demo/conf/topologies/admin.xml");
 +    config.setConfDir( new File(topoURL.getFile()).getParentFile().getParent() );
 +    String[] args = {"list-topologies", "--master", "knox"};
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +
 +    cli.run( args );
 +    assertThat(outContent.toString(), containsString("sandbox"));
 +    assertThat(outContent.toString(), containsString("admin"));
 +  }
 +
 +  private class GatewayConfigMock extends GatewayConfigImpl {
 +    private String confDir;
 +    public void setConfDir(String location) {
 +      confDir = location;
 +    }
 +
 +    @Override
 +    public String getGatewayConfDir() {
 +      return confDir;
 +    }
 +  }
 +
 +  private static XMLTag createBadTopology() {
 +    XMLTag xml = XMLDoc.newDocument(true)
 +        .addRoot( "topology" )
 +        .addTag( "gateway" )
 +
 +        .addTag( "provider" )
 +        .addTag( "role" ).addText( "authentication" )
 +        .addTag( "name" ).addText( "ShiroProvider" )
 +        .addTag( "enabled" ).addText( "123" )
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "" )
 +        .addTag( "value" ).addText( "org.apache.knox.gateway.shirorealm.KnoxLdapRealm" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.userDnTemplate" )
 +        .addTag( "value" ).addText( "uid={0},ou=people,dc=hadoop,dc=apache,dc=org" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.contextFactory.url" )
 +        .addTag( "value" ).addText( "ldap://localhost:8443" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.contextFactory.authenticationMechanism" )
 +        .addTag( "value" ).addText( "simple" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "urls./**" )
 +        .addTag( "value" ).addText( "authcBasic" ).gotoParent().gotoParent()
 +        .addTag( "provider" )
 +        .addTag( "role" ).addText( "identity-assertion" )
 +        .addTag( "enabled" ).addText( "vvv" )
 +        .addTag( "name" ).addText( "Default" ).gotoParent()
 +        .addTag( "provider" )
 +        .gotoRoot()
 +        .addTag( "service" )
 +        .addTag( "role" ).addText( "test-service-role" )
 +        .gotoRoot();
 +    return xml;
 +  }
 +
 +  private static XMLTag createGoodTopology() {
 +    XMLTag xml = XMLDoc.newDocument( true )
 +        .addRoot( "topology" )
 +        .addTag( "gateway" )
 +
 +        .addTag( "provider" )
 +        .addTag( "role" ).addText( "authentication" )
 +        .addTag( "name" ).addText( "ShiroProvider" )
 +        .addTag( "enabled" ).addText( "true" )
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm" )
 +        .addTag( "value" ).addText( "org.apache.knox.gateway.shirorealm.KnoxLdapRealm" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.userDnTemplate" )
 +        .addTag( "value" ).addText( "uid={0},ou=people,dc=hadoop,dc=apache,dc=org" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.contextFactory.url" )
 +        .addTag( "value" ).addText( "ldap://localhost:8443").gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "main.ldapRealm.contextFactory.authenticationMechanism" )
 +        .addTag( "value" ).addText( "simple" ).gotoParent()
 +        .addTag( "param" )
 +        .addTag( "name" ).addText( "urls./**" )
 +        .addTag( "value" ).addText( "authcBasic" ).gotoParent().gotoParent()
 +        .addTag( "provider" )
 +        .addTag( "role" ).addText( "identity-assertion" )
 +        .addTag( "enabled" ).addText( "true" )
 +        .addTag( "name" ).addText( "Default" ).gotoParent()
 +        .addTag( "provider" )
 +        .gotoRoot()
 +        .addTag( "service" )
 +        .addTag( "role" ).addText( "test-service-role" )
 +        .gotoRoot();
 +    return xml;
 +  }
 +
 +  private File writeTestTopology( String name, XMLTag xml ) throws IOException {
 +    // Create the test topology.
 +
 +    GatewayConfigMock config = new GatewayConfigMock();
 +    URL topoURL = ClassLoader.getSystemResource("conf-demo/conf/topologies/admin.xml");
 +    config.setConfDir( new File(topoURL.getFile()).getParentFile().getParent() );
 +
 +    File tempFile = new File( config.getGatewayTopologyDir(), name + ".xml." + UUID.randomUUID() );
 +    FileOutputStream stream = new FileOutputStream( tempFile );
 +    xml.toStream( stream );
 +    stream.close();
 +    File descriptor = new File( config.getGatewayTopologyDir(), name + ".xml" );
 +    tempFile.renameTo( descriptor );
 +    return descriptor;
 +  }
 +
 +  @Test
 +  public void testValidateTopology() throws Exception {
 +
 +    GatewayConfigMock config = new GatewayConfigMock();
 +    URL topoURL = ClassLoader.getSystemResource("conf-demo/conf/topologies/admin.xml");
 +    config.setConfDir( new File(topoURL.getFile()).getParentFile().getParent() );
 +    String[] args = {"validate-topology", "--master", "knox", "--cluster", "sandbox"};
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    cli.run( args );
 +
 +    assertThat(outContent.toString(), containsString(config.getGatewayTopologyDir()));
 +    assertThat(outContent.toString(), containsString("sandbox"));
 +    assertThat(outContent.toString(), containsString("success"));
 +    outContent.reset();
 +
 +
 +    String[] args2 = {"validate-topology", "--master", "knox", "--cluster", "NotATopology"};
 +    cli.run(args2);
 +
 +    assertThat(outContent.toString(), containsString("NotATopology"));
 +    assertThat(outContent.toString(), containsString("does not exist"));
 +    outContent.reset();
 +
 +    String[] args3 = {"validate-topology", "--master", "knox", "--path", config.getGatewayTopologyDir() + "/admin.xml"};
 +    cli.run(args3);
 +
 +    assertThat(outContent.toString(), containsString("admin"));
 +    assertThat(outContent.toString(), containsString("success"));
 +    outContent.reset();
 +
 +    String[] args4 = {"validate-topology", "--master", "knox", "--path", "not/a/path"};
 +    cli.run(args4);
 +    assertThat(outContent.toString(), containsString("does not exist"));
 +    assertThat(outContent.toString(), containsString("not/a/path"));
 +  }
 +
 +  @Test
 +  public void testValidateTopologyOutput() throws Exception {
 +
 +    File bad = writeTestTopology( "test-cluster-bad", createBadTopology() );
 +    File good = writeTestTopology( "test-cluster-good", createGoodTopology() );
 +
 +    GatewayConfigMock config = new GatewayConfigMock();
 +    URL topoURL = ClassLoader.getSystemResource("conf-demo/conf/topologies/admin.xml");
 +    config.setConfDir( new File(topoURL.getFile()).getParentFile().getParent() );
 +    String[] args = {"validate-topology", "--master", "knox", "--cluster", "test-cluster-bad"};
 +
 +    KnoxCLI cli = new KnoxCLI();
 +    cli.setConf( config );
 +    cli.run( args );
 +
 +    assertThat(outContent.toString(), containsString(config.getGatewayTopologyDir()));
 +    assertThat(outContent.toString(), containsString("test-cluster-bad"));
 +    assertThat(outContent.toString(), containsString("unsuccessful"));
 +    assertThat(outContent.toString(), containsString("Invalid content"));
 +    assertThat(outContent.toString(), containsString("Line"));
 +
 +
 +    outContent.reset();
 +
 +    String[] args2 = {"validate-topology", "--master", "knox", "--cluster", "test-cluster-good"};
 +
 +    cli.run(args2);
 +
 +    assertThat(outContent.toString(), containsString(config.getGatewayTopologyDir()));
 +    assertThat(outContent.toString(), containsString("success"));
 +    assertThat(outContent.toString(), containsString("test-cluster-good"));
 +
 +
 +  }
 +
 +  private static final String testDescriptorContentJSON = "{\n" +
 +                                                          "  \"discovery-address\":\"http://localhost:8080\",\n" +
 +                                                          "  \"discovery-user\":\"maria_dev\",\n" +
 +                                                          "  \"discovery-pwd-alias\":\"sandbox.discovery.password\",\n" +
 +                                                          "  \"provider-config-ref\":\"my-provider-config\",\n" +
 +                                                          "  \"cluster\":\"Sandbox\",\n" +
 +                                                          "  \"services\":[\n" +
 +                                                          "    {\"name\":\"NAMENODE\"},\n" +
 +                                                          "    {\"name\":\"JOBTRACKER\"},\n" +
 +                                                          "    {\"name\":\"WEBHDFS\"},\n" +
 +                                                          "    {\"name\":\"WEBHCAT\"},\n" +
 +                                                          "    {\"name\":\"OOZIE\"},\n" +
 +                                                          "    {\"name\":\"WEBHBASE\"},\n" +
 +                                                          "    {\"name\":\"HIVE\"},\n" +
 +                                                          "    {\"name\":\"RESOURCEMANAGER\"}\n" +
 +                                                          "  ]\n" +
 +                                                          "}";
 +}
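
For context, the scenarios exercised above correspond to knoxcli invocations of the following shape (a sketch assuming the standard bin/knoxcli.sh launcher from the Knox distribution; the cluster, alias, and hostname values are the placeholders used by the tests):

   % knoxcli create-master --master master
   % knoxcli create-alias alias1 --cluster cluster1 --value testvalue1 --master master
   % knoxcli list-alias --cluster cluster1 --master master
   % knoxcli delete-alias alias1 --cluster cluster1 --master master
   % knoxcli create-cert --hostname hostname1 --master master
   % knoxcli export-cert --type PEM
   % knoxcli list-topologies --master knox
   % knoxcli validate-topology --master knox --cluster sandbox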



[07/16] knox git commit: Updating NOTICE year

Posted by mo...@apache.org.
Updating NOTICE year


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/e0adfbd0
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/e0adfbd0
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/e0adfbd0

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: e0adfbd0768e839afe5b8988d4c5ad6ca93a4656
Parents: 348c4d7
Author: Colm O hEigeartaigh <co...@apache.org>
Authored: Thu Jan 4 10:48:40 2018 +0000
Committer: Colm O hEigeartaigh <co...@apache.org>
Committed: Thu Jan 4 10:48:40 2018 +0000

----------------------------------------------------------------------
 NOTICE | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/e0adfbd0/NOTICE
----------------------------------------------------------------------
diff --git a/NOTICE b/NOTICE
index b4b301c..0532d26 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache Knox
-Copyright 2012-2017 The Apache Software Foundation
+Copyright 2012-2018 The Apache Software Foundation
 
 This product includes software developed by
-The Apache Software Foundation (http://www.apache.org/).
\ No newline at end of file
+The Apache Software Foundation (http://www.apache.org/).


[09/16] knox git commit: KNOX-1161 - Update hadoop dependencies to hadoop 3

Posted by mo...@apache.org.
KNOX-1161 - Update hadoop dependencies to hadoop 3


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/772bc33d
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/772bc33d
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/772bc33d

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 772bc33d48dd37be5cd098992df3e50db24326f3
Parents: 6d4756f
Author: Sandeep More <mo...@apache.org>
Authored: Fri Jan 5 14:38:24 2018 -0500
Committer: Sandeep More <mo...@apache.org>
Committed: Fri Jan 5 14:38:24 2018 -0500

----------------------------------------------------------------------
 .../filter/PortMappingHelperHandler.java        |  2 +-
 gateway-test-release/pom.xml                    | 77 ++++++++++++++++++++
 pom.xml                                         | 31 +++++++-
 3 files changed, 107 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/772bc33d/gateway-server/src/main/java/org/apache/hadoop/gateway/filter/PortMappingHelperHandler.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/main/java/org/apache/hadoop/gateway/filter/PortMappingHelperHandler.java b/gateway-server/src/main/java/org/apache/hadoop/gateway/filter/PortMappingHelperHandler.java
index ea3efc4..06d9668 100644
--- a/gateway-server/src/main/java/org/apache/hadoop/gateway/filter/PortMappingHelperHandler.java
+++ b/gateway-server/src/main/java/org/apache/hadoop/gateway/filter/PortMappingHelperHandler.java
@@ -96,7 +96,7 @@ public class PortMappingHelperHandler extends HandlerWrapper {
       throws IOException, ServletException {
 
     String newTarget = target;
-    String baseURI = baseRequest.getUri().toString();
+    String baseURI = baseRequest.getRequestURI();
 
     // If Port Mapping feature enabled
     if (config.isGatewayPortMappingEnabled()) {
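
A note on the one-line handler change above: it appears to track the Jetty upgrade that rides along with the Hadoop 3 dependencies. Request.getUri() is no longer available in the Jetty 9.3 line (pinned as 9.3.19.v20170502 for the release tests below), so the handler switches to the servlet-standard accessor:

   String baseURI = baseRequest.getRequestURI();  // was: baseRequest.getUri().toString()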

http://git-wip-us.apache.org/repos/asf/knox/blob/772bc33d/gateway-test-release/pom.xml
----------------------------------------------------------------------
diff --git a/gateway-test-release/pom.xml b/gateway-test-release/pom.xml
index e61e0c8..48def02 100644
--- a/gateway-test-release/pom.xml
+++ b/gateway-test-release/pom.xml
@@ -34,7 +34,62 @@
         <module>webhdfs-test</module>
     </modules>
 
+    <properties>
+        <jetty.version>9.3.19.v20170502</jetty.version>
+        <mockito.version>1.8.4</mockito.version>
+        <jackson2.version>2.7.8</jackson2.version>
+    </properties>
+
+
     <dependencies>
+        <!-- Hadoop 3.0 Deps. -->
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+            <version>${jackson2.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-all</artifactId>
+            <version>${mockito.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-server</artifactId>
+            <version>${jetty.version}</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.eclipse.jetty</groupId>
+                    <artifactId>javax.servlet-api</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-util</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-servlet</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-webapp</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-util-ajax</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+
+
+
         <dependency>
             <groupId>javax.servlet</groupId>
             <artifactId>javax.servlet-api</artifactId>
@@ -118,10 +173,27 @@
                     <groupId>org.apache.directory.server</groupId>
                     <artifactId>apacheds-all</artifactId>
                 </exclusion>
+
+                <exclusion>
+                    <groupId>commons-configuration</groupId>
+                    <artifactId>commons-configuration</artifactId>
+                </exclusion>
+
+                <exclusion>
+                    <groupId>com.fasterxml.jackson.core</groupId>
+                    <artifactId>jackson-databind</artifactId>
+                </exclusion>
+
             </exclusions>
         </dependency>
 
         <dependency>
+            <groupId>commons-configuration</groupId>
+            <artifactId>commons-configuration</artifactId>
+            <version>1.10</version>
+        </dependency>
+
+        <dependency>
             <groupId>org.hamcrest</groupId>
             <artifactId>hamcrest-library</artifactId>
             <scope>test</scope>
@@ -152,6 +224,11 @@
         </dependency>
 
         <dependency>
+            <groupId>org.apache.kerby</groupId>
+            <artifactId>kerb-simplekdc</artifactId>
+        </dependency>
+
+        <dependency>
             <groupId>junit</groupId>
             <artifactId>junit</artifactId>
             <scope>test</scope>

http://git-wip-us.apache.org/repos/asf/knox/blob/772bc33d/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index fd7f62b..6bd9396 100644
--- a/pom.xml
+++ b/pom.xml
@@ -112,7 +112,7 @@
         <gateway-version>1.0.0-SNAPSHOT</gateway-version>
         <gateway-group>org.apache.knox</gateway-group>
         <groovy-version>2.4.6</groovy-version>
-        <hadoop-version>2.7.3</hadoop-version>
+        <hadoop-version>3.0.0</hadoop-version>
         <jackson.version>2.8.10</jackson.version>
         <jetty-version>9.2.15.v20160210</jetty-version>
         <surefire-version>2.16</surefire-version>
@@ -121,6 +121,9 @@
         <javax-websocket-version>1.1</javax-websocket-version>
         <metrics-version>3.1.2</metrics-version>
         <shiro.version>1.2.6</shiro.version>
+        <kerb-simplekdc-version>1.0.0-RC2</kerb-simplekdc-version>
+        <commons-beanutils-version>1.9.3</commons-beanutils-version>
+        <jackson2.version>2.2.2</jackson2.version>
     </properties>
 
     <licenses>
@@ -1015,10 +1018,34 @@
                         <groupId>xmlenc</groupId>
                         <artifactId>xmlenc</artifactId>
                     </exclusion>
+
+                    <exclusion>
+                        <groupId>com.sun.jersey</groupId>
+                        <artifactId>jersey-servlet</artifactId>
+                    </exclusion>
+
+                    <exclusion>
+                        <groupId>org.apache.kerby</groupId>
+                        <artifactId>kerb-simplekdc</artifactId>
+                    </exclusion>
+
+                    <!--
+                    <exclusion>
+                        <groupId>com.fasterxml.jackson.core</groupId>
+                        <artifactId>jackson-databind</artifactId>
+                    </exclusion>
+                    -->
+
                 </exclusions>
             </dependency>
 
             <dependency>
+                <groupId>org.apache.kerby</groupId>
+                <artifactId>kerb-simplekdc</artifactId>
+                <version>${kerb-simplekdc-version}</version>
+            </dependency>
+
+            <dependency>
                 <groupId>com.fasterxml.jackson.core</groupId>
                 <artifactId>jackson-databind</artifactId>
                 <version>${jackson.version}</version>
@@ -1057,7 +1084,7 @@
             <dependency>
                 <groupId>commons-beanutils</groupId>
                 <artifactId>commons-beanutils</artifactId>
-                <version>1.9.2</version>
+                <version>${commons-beanutils-version}</version>
             </dependency>
             <dependency>
                 <groupId>org.apache.commons</groupId>


[04/16] knox git commit: KNOX-1137

Posted by mo...@apache.org.
KNOX-1137


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/7e03a9cf
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/7e03a9cf
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/7e03a9cf

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 7e03a9cf1714747df01c4fa4afc00615feafa8c5
Parents: 7d42ffd
Author: Phil Zampino <pz...@gmail.com>
Authored: Tue Dec 12 12:41:54 2017 -0500
Committer: Phil Zampino <pz...@apache.org>
Committed: Wed Jan 3 12:59:37 2018 -0500

----------------------------------------------------------------------
 .../org/apache/hadoop/gateway/util/KnoxCLI.java | 153 ++++++++++++-------
 .../apache/hadoop/gateway/util/KnoxCLITest.java |  16 ++
 2 files changed, 118 insertions(+), 51 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/7e03a9cf/gateway-server/src/main/java/org/apache/hadoop/gateway/util/KnoxCLI.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/main/java/org/apache/hadoop/gateway/util/KnoxCLI.java b/gateway-server/src/main/java/org/apache/hadoop/gateway/util/KnoxCLI.java
index 5576df7..ec4b810 100644
--- a/gateway-server/src/main/java/org/apache/hadoop/gateway/util/KnoxCLI.java
+++ b/gateway-server/src/main/java/org/apache/hadoop/gateway/util/KnoxCLI.java
@@ -105,7 +105,9 @@ public class KnoxCLI extends Configured implements Tool {
       "   [" + LDAPSysBindCommand.USAGE + "]\n" +
       "   [" + ServiceTestCommand.USAGE + "]\n" +
       "   [" + RemoteRegistryClientsListCommand.USAGE + "]\n" +
+      "   [" + RemoteRegistryListProviderConfigsCommand.USAGE + "]\n" +
       "   [" + RemoteRegistryUploadProviderConfigCommand.USAGE + "]\n" +
+      "   [" + RemoteRegistryListDescriptorsCommand.USAGE + "]\n" +
       "   [" + RemoteRegistryUploadDescriptorCommand.USAGE + "]\n" +
       "   [" + RemoteRegistryDeleteProviderConfigCommand.USAGE + "]\n" +
       "   [" + RemoteRegistryDeleteDescriptorCommand.USAGE + "]\n" +
@@ -199,7 +201,9 @@ public class KnoxCLI extends Configured implements Tool {
    * % knoxcli service-test [--u user] [--p password] [--cluster clustername] [--hostname name] [--port port]
    * % knoxcli list-registry-clients
    * % knoxcli get-registry-acl entryName --registry-client name
+   * % knoxcli list-provider-configs --registry-client name
    * % knoxcli upload-provider-config filePath --registry-client name [--entry-name entryName]
+   * % knoxcli list-descriptors --registry-client name
    * % knoxcli upload-descriptor filePath --registry-client name [--entry-name entryName]
    * % knoxcli delete-provider-config providerConfig --registry-client name
    * % knoxcli delete-descriptor descriptor --registry-client name
@@ -371,6 +375,10 @@ public class KnoxCLI extends Configured implements Tool {
           return -1;
         }
         this.remoteRegistryClient = args[++i];
+      } else if (args[i].equalsIgnoreCase("list-provider-configs")) {
+        command = new RemoteRegistryListProviderConfigsCommand();
+      } else if (args[i].equalsIgnoreCase("list-descriptors")) {
+        command = new RemoteRegistryListDescriptorsCommand();
       } else if (args[i].equalsIgnoreCase("upload-provider-config")) {
         String fileName;
         if (i <= (args.length - 1)) {
@@ -484,6 +492,12 @@ public class KnoxCLI extends Configured implements Tool {
       out.println(RemoteRegistryGetACLCommand.USAGE + "\n\n" + RemoteRegistryGetACLCommand.DESC);
       out.println();
       out.println( div );
+      out.println(RemoteRegistryListProviderConfigsCommand.USAGE + "\n\n" + RemoteRegistryListProviderConfigsCommand.DESC);
+      out.println();
+      out.println( div );
+      out.println(RemoteRegistryListDescriptorsCommand.USAGE + "\n\n" + RemoteRegistryListDescriptorsCommand.DESC);
+      out.println();
+      out.println( div );
       out.println(RemoteRegistryUploadProviderConfigCommand.USAGE + "\n\n" + RemoteRegistryUploadProviderConfigCommand.DESC);
       out.println();
       out.println( div );
@@ -1878,16 +1892,80 @@ public class KnoxCLI extends Configured implements Tool {
     }
  }
 
+  private abstract class RemoteRegistryCommand extends Command {
+    static final String ROOT_ENTRY = "/knox";
+    static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
+    static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
+    static final String DESCRIPTORS_ENTRY = CONFIG_ENTRY + "/descriptors";
+
+    protected RemoteConfigurationRegistryClient getClient() {
+      RemoteConfigurationRegistryClient client = null;
+      if (remoteRegistryClient != null) {
+        RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
+        client = cs.get(remoteRegistryClient);
+        if (client == null) {
+          out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
+        }
+      } else {
+        out.println("Missing required argument : --registry-client\n");
+      }
+      return client;
+    }
+  }
+
+
+  public class RemoteRegistryListProviderConfigsCommand extends RemoteRegistryCommand {
+    static final String USAGE = "list-provider-configs --registry-client name";
+    static final String DESC = "Lists the provider configurations present in the specified remote registry\n";
+
+    @Override
+    public void execute() {
+      RemoteConfigurationRegistryClient client = getClient();
+      if (client != null) {
+        out.println("Provider Configurations (@" + client.getAddress() + ")");
+        List<String> entries = client.listChildEntries(PROVIDER_CONFIG_ENTRY);
+        for (String entry : entries) {
+          out.println(entry);
+        }
+        out.println();
+      }
+    }
+
+    @Override
+    public String getUsage() {
+      return USAGE + ":\n\n" + DESC;
+    }
+  }
+
+
+  public class RemoteRegistryListDescriptorsCommand extends RemoteRegistryCommand {
+    static final String USAGE = "list-descriptors --registry-client name";
+    static final String DESC = "Lists the descriptors present in the specified remote registry\n";
+
+    @Override
+    public void execute() {
+      RemoteConfigurationRegistryClient client = getClient();
+      if (client != null) {
+        out.println("Descriptors (@" + client.getAddress() + ")");
+        List<String> entries = client.listChildEntries(DESCRIPTORS_ENTRY);
+        for (String entry : entries) {
+          out.println(entry);
+        }
+        out.println();
+      }
+    }
+
+    @Override
+    public String getUsage() {
+      return USAGE + ":\n\n" + DESC;
+    }
+  }
+
 
   /**
    * Base class for remote config registry upload commands
    */
-  public abstract class RemoteRegistryUploadCommand extends Command {
-    protected static final String ROOT_ENTRY = "/knox";
-    protected static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
-    protected static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
-    protected static final String DESCRIPTORS__ENTRY = CONFIG_ENTRY + "/descriptors";
-
+  public abstract class RemoteRegistryUploadCommand extends RemoteRegistryCommand {
     private File sourceFile = null;
     protected String filename = null;
 
@@ -1928,21 +2006,13 @@ public class KnoxCLI extends Configured implements Tool {
     }
 
     protected void execute(String entryName, File sourceFile) throws Exception {
-      if (remoteRegistryClient != null) {
-        RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-        RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-        if (client != null) {
-          if (entryName != null) {
-            upload(client, entryName, sourceFile);
-          }
-        } else {
-          out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
+      RemoteConfigurationRegistryClient client = getClient();
+      if (client != null) {
+        if (entryName != null) {
+          upload(client, entryName, sourceFile);
         }
-      } else {
-        out.println("Missing required argument : --registry-client\n");
       }
     }
-
   }
 
 
@@ -1991,7 +2061,7 @@ public class KnoxCLI extends Configured implements Tool {
      */
     @Override
     public void execute() throws Exception {
-      super.execute(getEntryName(DESCRIPTORS__ENTRY), getSourceFile());
+      super.execute(getEntryName(DESCRIPTORS_ENTRY), getSourceFile());
     }
 
     /* (non-Javadoc)
@@ -2004,7 +2074,7 @@ public class KnoxCLI extends Configured implements Tool {
   }
 
 
-  public class RemoteRegistryGetACLCommand extends Command {
+  public class RemoteRegistryGetACLCommand extends RemoteRegistryCommand {
 
     static final String USAGE = "get-registry-acl entry --registry-client name";
     static final String DESC = "Presents the ACL settings for the specified remote registry entry.\n";
@@ -2020,21 +2090,14 @@ public class KnoxCLI extends Configured implements Tool {
      */
     @Override
     public void execute() throws Exception {
-      if (remoteRegistryClient != null) {
-        RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-        RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-        if (client != null) {
-          if (entry != null) {
-            List<RemoteConfigurationRegistryClient.EntryACL> acls = client.getACL(entry);
-            for (RemoteConfigurationRegistryClient.EntryACL acl : acls) {
-              out.println(acl.getType() + ":" + acl.getId() + ":" + acl.getPermissions());
-            }
+      RemoteConfigurationRegistryClient client = getClient();
+      if (client != null) {
+        if (entry != null) {
+          List<RemoteConfigurationRegistryClient.EntryACL> acls = client.getACL(entry);
+          for (RemoteConfigurationRegistryClient.EntryACL acl : acls) {
+            out.println(acl.getType() + ":" + acl.getId() + ":" + acl.getPermissions());
           }
-        } else {
-          out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
         }
-      } else {
-        out.println("Missing required argument : --registry-client\n");
       }
     }
 
@@ -2051,12 +2114,7 @@ public class KnoxCLI extends Configured implements Tool {
   /**
    * Base class for remote config registry delete commands
    */
-  public abstract class RemoteRegistryDeleteCommand extends Command {
-    protected static final String ROOT_ENTRY = "/knox";
-    protected static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
-    protected static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
-    protected static final String DESCRIPTORS__ENTRY = CONFIG_ENTRY + "/descriptors";
-
+  public abstract class RemoteRegistryDeleteCommand extends RemoteRegistryCommand {
     protected String entryName = null;
 
     protected RemoteRegistryDeleteCommand(String entryName) {
@@ -2071,18 +2129,11 @@ public class KnoxCLI extends Configured implements Tool {
     }
 
     protected void execute(String entryName) throws Exception {
-      if (remoteRegistryClient != null) {
-        RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-        RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-        if (client != null) {
-          if (entryName != null) {
-            delete(client, entryName);
-          }
-        } else {
-          out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
+      RemoteConfigurationRegistryClient client = getClient();
+      if (client != null) {
+        if (entryName != null) {
+          delete(client, entryName);
         }
-      } else {
-        out.println("Missing required argument : --registry-client\n");
       }
     }
   }
@@ -2118,7 +2169,7 @@ public class KnoxCLI extends Configured implements Tool {
 
     @Override
     public void execute() throws Exception {
-      execute(DESCRIPTORS__ENTRY + "/" + entryName);
+      execute(DESCRIPTORS_ENTRY + "/" + entryName);
     }
 
     @Override

http://git-wip-us.apache.org/repos/asf/knox/blob/7e03a9cf/gateway-server/src/test/java/org/apache/hadoop/gateway/util/KnoxCLITest.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/test/java/org/apache/hadoop/gateway/util/KnoxCLITest.java b/gateway-server/src/test/java/org/apache/hadoop/gateway/util/KnoxCLITest.java
index 2d4586f..8b2c0d0 100644
--- a/gateway-server/src/test/java/org/apache/hadoop/gateway/util/KnoxCLITest.java
+++ b/gateway-server/src/test/java/org/apache/hadoop/gateway/util/KnoxCLITest.java
@@ -193,6 +193,14 @@ public class KnoxCLITest {
 
       // Validate the result
       assertEquals(0, rc);
+
+      outContent.reset();
+      final String[] listArgs = {"list-provider-configs", "--registry-client", "test_client"};
+      cli.run(listArgs);
+      String outStr = outContent.toString().trim();
+      assertTrue(outStr.startsWith("Provider Configurations"));
+      assertTrue(outStr.endsWith(")\n"+providerConfigName));
+
       File registryFile = new File(testRegistry, "knox/config/shared-providers/" + providerConfigName);
       assertTrue(registryFile.exists());
       assertEquals(FileUtils.readFileToString(registryFile), providerConfigContent);
@@ -272,6 +280,14 @@ public class KnoxCLITest {
 
       // Validate the result
       assertEquals(0, rc);
+
+      outContent.reset();
+      final String[] listArgs = {"list-descriptors", "--registry-client", "test_client"};
+      cli.run(listArgs);
+      String outStr = outContent.toString().trim();
+      assertTrue(outStr.startsWith("Descriptors"));
+      assertTrue(outStr.endsWith(")\n"+descriptorName));
+
       File registryFile = new File(testRegistry, "knox/config/descriptors/" + descriptorName);
       assertTrue(registryFile.exists());
       assertEquals(FileUtils.readFileToString(registryFile), descriptorContent);
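
For reference, the listing output asserted on above takes this shape (a sketch derived from the out.println calls in the new list commands; the registry address and entry names are illustrative only):

   Provider Configurations (@localhost:2181)
   my-provider-config

   Descriptors (@localhost:2181)
   sandbox.json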


[16/16] knox git commit: KNOX-998 - Merge from 0.14.0 master

Posted by mo...@apache.org.
KNOX-998 - Merge from 0.14.0 master


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/92e2ec59
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/92e2ec59
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/92e2ec59

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 92e2ec59a5940a9e7c67ec5cd29044f811dee40a
Parents: e5fd062
Author: Sandeep More <mo...@apache.org>
Authored: Tue Jan 9 14:51:08 2018 -0500
Committer: Sandeep More <mo...@apache.org>
Committed: Tue Jan 9 14:51:08 2018 -0500

----------------------------------------------------------------------
 .../discovery/ambari/ServiceURLCreator.java     | 32 --------
 .../discovery/ambari/ServiceURLFactory.java     | 75 -----------------
 .../discovery/ambari/WebHdfsUrlCreator.java     | 84 --------------------
 .../discovery/ambari/ServiceURLCreator.java     | 32 ++++++++
 .../discovery/ambari/ServiceURLFactory.java     | 75 +++++++++++++++++
 .../discovery/ambari/WebHdfsUrlCreator.java     | 84 ++++++++++++++++++++
 6 files changed, 191 insertions(+), 191 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
deleted file mode 100644
index 8295155..0000000
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.hadoop.gateway.topology.discovery.ambari;
-
-import java.util.List;
-
-public interface ServiceURLCreator {
-
-  /**
-   * Creates one or more cluster-specific URLs for the specified service.
-   *
-   * @param service The service identifier.
-   *
-   * @return A List of created URL strings; the list may be empty.
-   */
-  List<String> create(String service);
-
-}

http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
deleted file mode 100644
index fa9f89a..0000000
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.hadoop.gateway.topology.discovery.ambari;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * Factory for creating cluster-specific service URLs.
- */
-public class ServiceURLFactory {
-
-  private Map<String, ServiceURLCreator> urlCreators = new HashMap<>();
-
-  private ServiceURLCreator defaultURLCreator = null;
-
-
-  private ServiceURLFactory(AmbariCluster cluster) {
-    // Default URL creator
-    defaultURLCreator = new AmbariDynamicServiceURLCreator(cluster);
-
-    // Custom (internal) URL creators
-    urlCreators.put("WEBHDFS", new WebHdfsUrlCreator(cluster));
-  }
-
-
-  /**
-   * Create a new factory for the specified cluster.
-   *
-   * @param cluster The cluster.
-   *
-   * @return A ServiceURLFactory instance.
-   */
-  public static ServiceURLFactory newInstance(AmbariCluster cluster) {
-    return new ServiceURLFactory(cluster);
-  }
-
-
-  /**
-   * Create one or more cluster-specific URLs for the specified service.
-   *
-   * @param service The service.
-   *
-   * @return A List of service URL strings; the list may be empty.
-   */
-  public List<String> create(String service) {
-    List<String> urls = new ArrayList<>();
-
-    ServiceURLCreator creator = urlCreators.get(service);
-    if (creator == null) {
-      creator = defaultURLCreator;
-    }
-
-    urls.addAll(creator.create(service));
-
-    return urls;
-  }
-
-}

http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
deleted file mode 100644
index 1d11c66..0000000
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
+++ /dev/null
@@ -1,84 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.hadoop.gateway.topology.discovery.ambari;
-
-import org.apache.hadoop.gateway.i18n.messages.MessagesFactory;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-/**
- * A ServiceURLCreator implementation for WEBHDFS.
- */
-public class WebHdfsUrlCreator implements ServiceURLCreator {
-
-  private static final String SERVICE = "WEBHDFS";
-
-  private AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
-
-  private AmbariCluster cluster = null;
-
-  WebHdfsUrlCreator(AmbariCluster cluster) {
-    this.cluster = cluster;
-  }
-
-  @Override
-  public List<String> create(String service) {
-    List<String> urls = new ArrayList<>();
-
-    if (SERVICE.equals(service)) {
-      AmbariCluster.ServiceConfiguration sc = cluster.getServiceConfiguration("HDFS", "hdfs-site");
-
-      // First, check if it's HA config
-      String nameServices = null;
-      AmbariComponent nameNodeComp = cluster.getComponent("NAMENODE");
-      if (nameNodeComp != null) {
-        nameServices = nameNodeComp.getConfigProperty("dfs.nameservices");
-      }
-
-      if (nameServices != null && !nameServices.isEmpty()) {
-        // If it is an HA configuration
-        Map<String, String> props = sc.getProperties();
-
-        // Name node HTTP addresses are defined as properties of the form:
-        //      dfs.namenode.http-address.<NAMESERVICES>.nn<INDEX>
-        // So, this iterates over the nn<INDEX> properties until there is no such property (since it cannot be known how
-        // many are defined by any other means).
-        int i = 1;
-        String propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
-        while (propertyValue != null) {
-          urls.add(createURL(propertyValue));
-          propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
-        }
-      } else { // If it's not an HA configuration, get the single name node HTTP address
-        urls.add(createURL(sc.getProperties().get("dfs.namenode.http-address")));
-      }
-    }
-
-    return urls;
-  }
-
-  private static String getHANameNodeHttpAddress(Map<String, String> props, String nameServices, int index) {
-    return props.get("dfs.namenode.http-address." + nameServices + ".nn" + index);
-  }
-
-  private static String createURL(String address) {
-    return "http://" + address + "/webhdfs";
-  }
-
-}
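
For reference, the HA properties this creator iterates over take the following shape in hdfs-site (a sketch; the nameservice ID and hosts are placeholders, and 50070 is the classic NameNode HTTP port):

   dfs.nameservices = mycluster
   dfs.namenode.http-address.mycluster.nn1 = nn1.example.com:50070
   dfs.namenode.http-address.mycluster.nn2 = nn2.example.com:50070

from which createURL() yields http://nn1.example.com:50070/webhdfs and http://nn2.example.com:50070/webhdfs.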

http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLCreator.java
new file mode 100644
index 0000000..c2a2d22
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLCreator.java
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.knox.gateway.topology.discovery.ambari;
+
+import java.util.List;
+
+public interface ServiceURLCreator {
+
+  /**
+   * Creates one or more cluster-specific URLs for the specified service.
+   *
+   * @param service The service identifier.
+   *
+   * @return A List of created URL strings; the list may be empty.
+   */
+  List<String> create(String service);
+
+}

http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLFactory.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLFactory.java b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLFactory.java
new file mode 100644
index 0000000..e009585
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLFactory.java
@@ -0,0 +1,75 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.knox.gateway.topology.discovery.ambari;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Factory for creating cluster-specific service URLs.
+ */
+public class ServiceURLFactory {
+
+  private Map<String, ServiceURLCreator> urlCreators = new HashMap<>();
+
+  private ServiceURLCreator defaultURLCreator = null;
+
+
+  private ServiceURLFactory(AmbariCluster cluster) {
+    // Default URL creator
+    defaultURLCreator = new AmbariDynamicServiceURLCreator(cluster);
+
+    // Custom (internal) URL creators
+    urlCreators.put("WEBHDFS", new WebHdfsUrlCreator(cluster));
+  }
+
+
+  /**
+   * Create a new factory for the specified cluster.
+   *
+   * @param cluster The cluster.
+   *
+   * @return A ServiceURLFactory instance.
+   */
+  public static ServiceURLFactory newInstance(AmbariCluster cluster) {
+    return new ServiceURLFactory(cluster);
+  }
+
+
+  /**
+   * Create one or more cluster-specific URLs for the specified service.
+   *
+   * @param service The service.
+   *
+   * @return A List of service URL strings; the list may be empty.
+   */
+  public List<String> create(String service) {
+    List<String> urls = new ArrayList<>();
+
+    ServiceURLCreator creator = urlCreators.get(service);
+    if (creator == null) {
+      creator = defaultURLCreator;
+    }
+
+    urls.addAll(creator.create(service));
+
+    return urls;
+  }
+
+}

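Callers never need to know which creator applies; the factory hides the dispatch. A usage sketch, assuming cluster is an AmbariCluster obtained from Ambari service discovery:

    import java.util.List;

    List<String> discoverUrls(AmbariCluster cluster) {
      ServiceURLFactory factory = ServiceURLFactory.newInstance(cluster);
      List<String> webHdfsUrls = factory.create("WEBHDFS"); // dispatched to WebHdfsUrlCreator
      List<String> oozieUrls   = factory.create("OOZIE");   // no custom creator registered, so the
                                                            // default AmbariDynamicServiceURLCreator runs
      webHdfsUrls.addAll(oozieUrls);
      return webHdfsUrls;
    }
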
http://git-wip-us.apache.org/repos/asf/knox/blob/92e2ec59/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
new file mode 100644
index 0000000..1c65982
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.knox.gateway.topology.discovery.ambari;
+
+import org.apache.knox.gateway.i18n.messages.MessagesFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * A ServiceURLCreator implementation for WEBHDFS.
+ */
+public class WebHdfsUrlCreator implements ServiceURLCreator {
+
+  private static final String SERVICE = "WEBHDFS";
+
+  private AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
+
+  private AmbariCluster cluster = null;
+
+  WebHdfsUrlCreator(AmbariCluster cluster) {
+    this.cluster = cluster;
+  }
+
+  @Override
+  public List<String> create(String service) {
+    List<String> urls = new ArrayList<>();
+
+    if (SERVICE.equals(service)) {
+      AmbariCluster.ServiceConfiguration sc = cluster.getServiceConfiguration("HDFS", "hdfs-site");
+
+      // First, check if it's HA config
+      String nameServices = null;
+      AmbariComponent nameNodeComp = cluster.getComponent("NAMENODE");
+      if (nameNodeComp != null) {
+        nameServices = nameNodeComp.getConfigProperty("dfs.nameservices");
+      }
+
+      if (nameServices != null && !nameServices.isEmpty()) {
+        // If it is an HA configuration
+        Map<String, String> props = sc.getProperties();
+
+        // Name node HTTP addresses are defined as properties of the form:
+        //      dfs.namenode.http-address.<NAMESERVICES>.nn<INDEX>
+        // So, this code iterates over the nn<INDEX> properties until no such property is defined (there is
+        // no other way to determine how many name nodes are configured).
+        int i = 1;
+        String propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
+        while (propertyValue != null) {
+          urls.add(createURL(propertyValue));
+          propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
+        }
+      } else { // If it's not an HA configuration, get the single name node HTTP address
+        urls.add(createURL(sc.getProperties().get("dfs.namenode.http-address")));
+      }
+    }
+
+    return urls;
+  }
+
+  private static String getHANameNodeHttpAddress(Map<String, String> props, String nameServices, int index) {
+    return props.get("dfs.namenode.http-address." + nameServices + ".nn" + index);
+  }
+
+  private static String createURL(String address) {
+    return "http://" + address + "/webhdfs";
+  }
+
+}

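As a worked example of the HA branch above, suppose hdfs-site contains these (hypothetical) properties:

    dfs.nameservices = mycluster
    dfs.namenode.http-address.mycluster.nn1 = nn1-host.example.com:50070
    dfs.namenode.http-address.mycluster.nn2 = nn2-host.example.com:50070

The loop finds nn1 and nn2, yields http://nn1-host.example.com:50070/webhdfs and http://nn2-host.example.com:50070/webhdfs, and stops at nn3 because no such property is defined.
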

[05/16] knox git commit: KNOX-1144

Posted by mo...@apache.org.
KNOX-1144


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/a438bcc1
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/a438bcc1
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/a438bcc1

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: a438bcc1e4dec613f078ecea25bb177a3904a3a7
Parents: 7e03a9c
Author: Phil Zampino <pz...@gmail.com>
Authored: Tue Dec 12 10:11:25 2017 -0500
Committer: Phil Zampino <pz...@apache.org>
Committed: Wed Jan 3 14:06:29 2018 -0500

----------------------------------------------------------------------
 .../topology/impl/DefaultTopologyService.java   | 40 +++++++++++++++-----
 .../DefaultRemoteConfigurationMonitor.java      | 22 ++++++++++-
 .../ZooKeeperConfigurationMonitorTest.java      | 17 ++++++++-
 3 files changed, 65 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/a438bcc1/gateway-server/src/main/java/org/apache/hadoop/gateway/services/topology/impl/DefaultTopologyService.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/main/java/org/apache/hadoop/gateway/services/topology/impl/DefaultTopologyService.java b/gateway-server/src/main/java/org/apache/hadoop/gateway/services/topology/impl/DefaultTopologyService.java
index aded6cd..398f3e9 100644
--- a/gateway-server/src/main/java/org/apache/hadoop/gateway/services/topology/impl/DefaultTopologyService.java
+++ b/gateway-server/src/main/java/org/apache/hadoop/gateway/services/topology/impl/DefaultTopologyService.java
@@ -52,6 +52,8 @@ import org.apache.hadoop.gateway.topology.builder.TopologyBuilder;
 import org.apache.hadoop.gateway.topology.discovery.ClusterConfigurationMonitor;
 import org.apache.hadoop.gateway.topology.monitor.RemoteConfigurationMonitor;
 import org.apache.hadoop.gateway.topology.monitor.RemoteConfigurationMonitorFactory;
+import org.apache.hadoop.gateway.topology.simple.SimpleDescriptor;
+import org.apache.hadoop.gateway.topology.simple.SimpleDescriptorFactory;
 import org.apache.hadoop.gateway.topology.simple.SimpleDescriptorHandler;
 import org.apache.hadoop.gateway.topology.validation.TopologyValidator;
 import org.apache.hadoop.gateway.topology.xml.AmbariFormatXmlTopologyRules;
@@ -592,18 +594,39 @@ public class DefaultTopologyService
       initListener(sharedProvidersDirectory, spm, spm);
       log.monitoringProviderConfigChangesInDirectory(sharedProvidersDirectory.getAbsolutePath());
 
-      // For all the descriptors currently in the descriptors dir at start-up time, trigger topology generation.
+      // For all the descriptors currently in the descriptors dir at start-up time, determine if topology regeneration
+      // is required.
       // This happens prior to the start-up loading of the topologies.
       String[] descriptorFilenames =  descriptorsDirectory.list();
       if (descriptorFilenames != null) {
         for (String descriptorFilename : descriptorFilenames) {
           if (DescriptorsMonitor.isDescriptorFile(descriptorFilename)) {
+            String topologyName = FilenameUtils.getBaseName(descriptorFilename);
+            File existingDescriptorFile = getExistingFile(descriptorsDirectory, topologyName);
+
             // If there isn't a corresponding topology file, or if the descriptor has been modified since the
             // corresponding topology file was generated, then trigger generation of one
-            File matchingTopologyFile = getExistingFile(topologiesDirectory, FilenameUtils.getBaseName(descriptorFilename));
-            if (matchingTopologyFile == null ||
-                    matchingTopologyFile.lastModified() < (new File(descriptorsDirectory, descriptorFilename)).lastModified()) {
-              descriptorsMonitor.onFileChange(new File(descriptorsDirectory, descriptorFilename));
+            File matchingTopologyFile = getExistingFile(topologiesDirectory, topologyName);
+            if (matchingTopologyFile == null || matchingTopologyFile.lastModified() < existingDescriptorFile.lastModified()) {
+              descriptorsMonitor.onFileChange(existingDescriptorFile);
+            } else {
+              // If regeneration is NOT required, then we at least need to report the provider configuration
+              // reference relationship (KNOX-1144)
+              String normalizedDescriptorPath = FilenameUtils.normalize(existingDescriptorFile.getAbsolutePath());
+
+              // Parse the descriptor to determine the provider config reference
+              SimpleDescriptor sd = SimpleDescriptorFactory.parse(normalizedDescriptorPath);
+              if (sd != null) {
+                File referencedProviderConfig =
+                           getExistingFile(sharedProvidersDirectory, FilenameUtils.getBaseName(sd.getProviderConfig()));
+                if (referencedProviderConfig != null) {
+                  List<String> references =
+                         descriptorsMonitor.getReferencingDescriptors(referencedProviderConfig.getAbsolutePath());
+                  if (!references.contains(normalizedDescriptorPath)) {
+                    references.add(normalizedDescriptorPath);
+                  }
+                }
+              }
             }
           }
         }
@@ -711,11 +734,8 @@ public class DefaultTopologyService
     }
 
     List<String> getReferencingDescriptors(String providerConfigPath) {
-      List<String> result = providerConfigReferences.get(FilenameUtils.normalize(providerConfigPath));
-      if (result == null) {
-        result = Collections.emptyList();
-      }
-      return result;
+      String normalizedPath = FilenameUtils.normalize(providerConfigPath);
+      return providerConfigReferences.computeIfAbsent(normalizedPath, p -> new ArrayList<>());
     }
 
     @Override

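The computeIfAbsent() change in getReferencingDescriptors() is what makes the start-up bookkeeping earlier in this diff work: the old version returned Collections.emptyList() for an unknown provider config, which is immutable and never stored in the map, so the caller's references.add(...) would have thrown UnsupportedOperationException. A small sketch of the new behavior, with illustrative paths:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    Map<String, List<String>> providerConfigReferences = new HashMap<>();

    // A missing key is now populated with a fresh, mutable ArrayList that
    // lives in the map, so additions through the returned reference persist.
    List<String> refs = providerConfigReferences.computeIfAbsent(
        "/conf/shared-providers/sso-providers.xml", p -> new ArrayList<>());
    refs.add("/conf/descriptors/cluster1.json");
    // The same list is returned on the next lookup, with the entry retained.
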
http://git-wip-us.apache.org/repos/asf/knox/blob/a438bcc1/gateway-server/src/main/java/org/apache/hadoop/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/main/java/org/apache/hadoop/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java b/gateway-server/src/main/java/org/apache/hadoop/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
index af60058..3bf3330 100644
--- a/gateway-server/src/main/java/org/apache/hadoop/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
+++ b/gateway-server/src/main/java/org/apache/hadoop/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
@@ -29,6 +29,7 @@ import org.apache.zookeeper.ZooDefs;
 import java.io.File;
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
@@ -112,6 +113,19 @@ class DefaultRemoteConfigurationMonitor implements RemoteConfigurationMonitor {
         if (providerConfigs == null) {
             // Either the ZNode does not exist, or there is an authentication problem
             throw new IllegalStateException("Unable to access remote path: " + NODE_KNOX_PROVIDERS);
+        } else {
+            // Download any existing provider configs in the remote registry, which either do not exist locally, or have
+            // been modified, so that they are certain to be present when this monitor downloads any descriptors that
+            // reference them.
+            for (String providerConfig : providerConfigs) {
+                File localFile = new File(providersDir, providerConfig);
+
+                byte[] remoteContent = client.getEntryData(NODE_KNOX_PROVIDERS + "/" + providerConfig).getBytes();
+                if (!localFile.exists() || !Arrays.equals(remoteContent, FileUtils.readFileToByteArray(localFile))) {
+                    FileUtils.writeByteArrayToFile(localFile, remoteContent);
+                    log.downloadedRemoteConfigFile(providersDir.getName(), providerConfig);
+                }
+            }
         }
 
         // Confirm access to the remote descriptors directory znode
@@ -213,8 +227,12 @@ class DefaultRemoteConfigurationMonitor implements RemoteConfigurationMonitor {
             File localFile = new File(localDir, path.substring(path.lastIndexOf("/")));
             if (data != null) {
                 try {
-                    FileUtils.writeByteArrayToFile(localFile, data);
-                    log.downloadedRemoteConfigFile(localDir.getName(), localFile.getName());
+                    // If there is no corresponding local file, or the content is different from the existing local
+                    // file, write the data to the local file.
+                    if (!localFile.exists() || !Arrays.equals(FileUtils.readFileToByteArray(localFile), data)) {
+                        FileUtils.writeByteArrayToFile(localFile, data);
+                        log.downloadedRemoteConfigFile(localDir.getName(), localFile.getName());
+                    }
                 } catch (IOException e) {
                     log.errorDownloadingRemoteConfiguration(path, e);
                 }

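Both hunks above apply the same guard: write a remote entry to the local file only when the file is missing or its bytes differ, presumably to avoid rewriting identical content and needlessly re-triggering the local monitors watching these directories. A standalone sketch of the guard using the same commons-io calls; writeIfChanged is an illustrative name, not part of the commit:

    import org.apache.commons.io.FileUtils;
    import java.io.File;
    import java.io.IOException;
    import java.util.Arrays;

    static boolean writeIfChanged(File localFile, byte[] remoteContent) throws IOException {
        // Skip the write when the local copy already matches the remote bytes.
        if (localFile.exists() && Arrays.equals(FileUtils.readFileToByteArray(localFile), remoteContent)) {
            return false;
        }
        FileUtils.writeByteArrayToFile(localFile, remoteContent);
        return true;
    }
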
http://git-wip-us.apache.org/repos/asf/knox/blob/a438bcc1/gateway-server/src/test/java/org/apache/hadoop/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
----------------------------------------------------------------------
diff --git a/gateway-server/src/test/java/org/apache/hadoop/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java b/gateway-server/src/test/java/org/apache/hadoop/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
index 1c4ed6e..ecf5b70 100644
--- a/gateway-server/src/test/java/org/apache/hadoop/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
+++ b/gateway-server/src/test/java/org/apache/hadoop/gateway/topology/monitor/ZooKeeperConfigurationMonitorTest.java
@@ -113,10 +113,10 @@ public class ZooKeeperConfigurationMonitorTest {
 
         client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).withACL(acls).forPath(PATH_KNOX_DESCRIPTORS);
         assertNotNull("Failed to create node:" + PATH_KNOX_DESCRIPTORS,
-                client.checkExists().forPath(PATH_KNOX_DESCRIPTORS));
+                      client.checkExists().forPath(PATH_KNOX_DESCRIPTORS));
         client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).withACL(acls).forPath(PATH_KNOX_PROVIDERS);
         assertNotNull("Failed to create node:" + PATH_KNOX_PROVIDERS,
-                client.checkExists().forPath(PATH_KNOX_PROVIDERS));
+                      client.checkExists().forPath(PATH_KNOX_PROVIDERS));
     }
 
     @AfterClass
@@ -164,12 +164,25 @@ public class ZooKeeperConfigurationMonitorTest {
 
         DefaultRemoteConfigurationMonitor cm = new DefaultRemoteConfigurationMonitor(gc, clientService);
 
+        // Create a provider configuration in the test ZK, prior to starting the monitor, to make sure that the monitor
+        // will download existing entries upon starting.
+        final String preExistingProviderConfig = getProviderPath("pre-existing-providers.xml");
+        client.create().withMode(CreateMode.PERSISTENT).forPath(preExistingProviderConfig,
+                                                                TEST_PROVIDERS_CONFIG_1.getBytes());
+        File preExistingProviderConfigLocalFile = new File(providersDir, "pre-existing-providers.xml");
+        assertFalse("This file should not exist locally prior to the monitor starting.",
+                    preExistingProviderConfigLocalFile.exists());
+
         try {
             cm.start();
         } catch (Exception e) {
             fail("Failed to start monitor: " + e.getMessage());
         }
 
+        assertTrue("This file should exist locally immediately after the monitor starts.",
+                    preExistingProviderConfigLocalFile.exists());
+
+
         try {
             final String pc_one_znode = getProviderPath("providers-config1.xml");
             final File pc_one         = new File(providersDir, "providers-config1.xml");


[12/16] knox git commit: Merge branch 'master' into KNOX-998-Package_Restructuring

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/main/java/org/apache/knox/gateway/util/KnoxCLI.java
----------------------------------------------------------------------
diff --cc gateway-server/src/main/java/org/apache/knox/gateway/util/KnoxCLI.java
index 928c37e,0000000..a987433
mode 100644,000000..100644
--- a/gateway-server/src/main/java/org/apache/knox/gateway/util/KnoxCLI.java
+++ b/gateway-server/src/main/java/org/apache/knox/gateway/util/KnoxCLI.java
@@@ -1,2154 -1,0 +1,2205 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.knox.gateway.util;
 +
 +import java.io.BufferedReader;
 +import java.io.Console;
 +import java.io.File;
 +import java.io.IOException;
 +import java.io.InputStream;
 +import java.io.InputStreamReader;
 +import java.io.PrintStream;
 +import java.net.InetAddress;
 +import java.net.UnknownHostException;
 +import java.security.cert.Certificate;
 +import java.util.Arrays;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Properties;
 +import java.util.UUID;
 +import javax.net.ssl.SSLContext;
 +import javax.net.ssl.SSLException;
 +
 +import org.apache.commons.codec.binary.Base64;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.conf.Configured;
 +import org.apache.knox.gateway.GatewayCommandLine;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.config.impl.GatewayConfigImpl;
 +import org.apache.knox.gateway.deploy.DeploymentFactory;
 +import org.apache.knox.gateway.services.CLIGatewayServices;
 +import org.apache.knox.gateway.services.GatewayServices;
 +import org.apache.knox.gateway.services.Service;
 +import org.apache.knox.gateway.services.ServiceLifecycleException;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClient;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClientService;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.gateway.services.security.KeystoreService;
 +import org.apache.knox.gateway.services.security.KeystoreServiceException;
 +import org.apache.knox.gateway.services.security.MasterService;
 +import org.apache.knox.gateway.services.security.impl.X509CertificateUtil;
 +import org.apache.knox.gateway.services.topology.TopologyService;
 +import org.apache.knox.gateway.topology.Provider;
 +import org.apache.knox.gateway.topology.Topology;
 +import org.apache.knox.gateway.topology.validation.TopologyValidator;
 +import org.apache.hadoop.util.Tool;
 +import org.apache.hadoop.util.ToolRunner;
 +import org.apache.http.client.ClientProtocolException;
 +import org.apache.http.client.methods.CloseableHttpResponse;
 +import org.apache.http.client.methods.HttpGet;
 +import org.apache.http.conn.ssl.SSLContexts;
 +import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
 +import org.apache.http.impl.client.CloseableHttpClient;
 +import org.apache.http.impl.client.HttpClients;
 +import org.apache.log4j.PropertyConfigurator;
 +import org.apache.shiro.SecurityUtils;
 +import org.apache.shiro.authc.AuthenticationException;
 +import org.apache.shiro.authc.UsernamePasswordToken;
 +import org.apache.shiro.config.ConfigurationException;
 +import org.apache.shiro.config.Ini;
 +import org.apache.shiro.config.IniSecurityManagerFactory;
 +import org.apache.shiro.subject.Subject;
 +import org.apache.shiro.util.Factory;
 +import org.apache.shiro.util.ThreadContext;
 +import org.eclipse.persistence.oxm.MediaType;
 +import org.jboss.shrinkwrap.api.exporter.ExplodedExporter;
 +import org.jboss.shrinkwrap.api.spec.EnterpriseArchive;
 +
 +/**
 + *
 + */
 +public class KnoxCLI extends Configured implements Tool {
 +
 +  private static final String USAGE_PREFIX = "KnoxCLI {cmd} [options]";
 +  private static final String COMMANDS =
 +      "   [--help]\n" +
 +      "   [" + VersionCommand.USAGE + "]\n" +
 +      "   [" + MasterCreateCommand.USAGE + "]\n" +
 +      "   [" + CertCreateCommand.USAGE + "]\n" +
 +      "   [" + CertExportCommand.USAGE + "]\n" +
 +      "   [" + AliasCreateCommand.USAGE + "]\n" +
 +      "   [" + AliasDeleteCommand.USAGE + "]\n" +
 +      "   [" + AliasListCommand.USAGE + "]\n" +
 +      "   [" + RedeployCommand.USAGE + "]\n" +
 +      "   [" + ListTopologiesCommand.USAGE + "]\n" +
 +      "   [" + ValidateTopologyCommand.USAGE + "]\n" +
 +      "   [" + LDAPAuthCommand.USAGE + "]\n" +
 +      "   [" + LDAPSysBindCommand.USAGE + "]\n" +
 +      "   [" + ServiceTestCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryClientsListCommand.USAGE + "]\n" +
++      "   [" + RemoteRegistryListProviderConfigsCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryUploadProviderConfigCommand.USAGE + "]\n" +
++      "   [" + RemoteRegistryListDescriptorsCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryUploadDescriptorCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryDeleteProviderConfigCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryDeleteDescriptorCommand.USAGE + "]\n" +
 +      "   [" + RemoteRegistryGetACLCommand.USAGE + "]\n";
 +
 +  /** allows stdout to be captured if necessary */
 +  public PrintStream out = System.out;
 +  /** allows stderr to be captured if necessary */
 +  public PrintStream err = System.err;
 +
 +  private static GatewayServices services = new CLIGatewayServices();
 +  private Command command;
 +  private String value = null;
 +  private String cluster = null;
 +  private String path = null;
 +  private String generate = "false";
 +  private String hostname = null;
 +  private String port = null;
 +  private boolean force = false;
 +  private boolean debug = false;
 +  private String user = null;
 +  private String pass = null;
 +  private boolean groups = false;
 +
 +  private String remoteRegistryClient = null;
 +  private String remoteRegistryEntryName = null;
 +
 +  // For testing only
 +  private String master = null;
 +  private String type = null;
 +
 +  /* (non-Javadoc)
 +   * @see org.apache.hadoop.util.Tool#run(java.lang.String[])
 +   */
 +  @Override
 +  public int run(String[] args) throws Exception {
 +    int exitCode = 0;
 +    try {
 +      exitCode = init(args);
 +      if (exitCode != 0) {
 +        return exitCode;
 +      }
 +      if (command != null && command.validate()) {
 +        initializeServices( command instanceof MasterCreateCommand );
 +        command.execute();
 +      } else if (!(command instanceof MasterCreateCommand)){
 +        out.println("ERROR: Invalid Command" + "\n" + "Unrecognized option: " +
 +            args[0] + "\n" +
 +            "A fatal exception has occurred. Program will exit.");
 +        exitCode = -2;
 +      }
 +    } catch (ServiceLifecycleException sle) {
 +      out.println("ERROR: Internal Error: Please refer to the knoxcli.log " +
 +          "file for details. " + sle.getMessage());
 +    } catch (Exception e) {
 +      e.printStackTrace( err );
 +      err.flush();
 +      return -3;
 +    }
 +    return exitCode;
 +  }
 +
 +  GatewayServices getGatewayServices() {
 +    return services;
 +  }
 +
 +  private void initializeServices(boolean persisting) throws ServiceLifecycleException {
 +    GatewayConfig config = getGatewayConfig();
 +    Map<String,String> options = new HashMap<>();
 +    options.put(GatewayCommandLine.PERSIST_LONG, Boolean.toString(persisting));
 +    if (master != null) {
 +      options.put("master", master);
 +    }
 +    services.init(config, options);
 +  }
 +
 +  /**
 +   * Parse the command line arguments and initialize the data
 +   * <pre>
 +   * % knoxcli version
 +   * % knoxcli list-topologies
 +   * % knoxcli master-create keyName [--size size] [--generate]
 +   * % knoxcli create-alias alias [--cluster clustername] [--generate] [--value v]
 +   * % knoxcli list-alias [--cluster clustername]
 +   * % knoxcli delete-alias alias [--cluster clustername]
 +   * % knoxcli create-cert alias [--hostname h]
 +   * % knoxcli redeploy [--cluster clustername]
 +   * % knoxcli validate-topology [--cluster clustername] | [--path <path/to/file>]
 +   * % knoxcli user-auth-test [--cluster clustername] [--u username] [--p password]
 +   * % knoxcli system-user-auth-test [--cluster clustername] [--d]
 +   * % knoxcli service-test [--u user] [--p password] [--cluster clustername] [--hostname name] [--port port]
 +   * % knoxcli list-registry-clients
 +   * % knoxcli get-registry-acl entryName --registry-client name
++   * % knoxcli list-provider-configs --registry-client name
 +   * % knoxcli upload-provider-config filePath --registry-client name [--entry-name entryName]
++   * % knoxcli list-descriptors --registry-client name
 +   * % knoxcli upload-descriptor filePath --registry-client name [--entry-name entryName]
 +   * % knoxcli delete-provider-config providerConfig --registry-client name
 +   * % knoxcli delete-descriptor descriptor --registry-client name
 +   * </pre>
 +   * @param args
 +   * @return
 +   * @throws IOException
 +   */
 +  private int init(String[] args) throws IOException {
 +    if (args.length == 0) {
 +      printKnoxShellUsage();
 +      return -1;
 +    }
 +    for (int i = 0; i < args.length; i++) { // parse command line
 +      if (args[i].equals("create-master")) {
 +        command = new MasterCreateCommand();
 +        if ((args.length > i + 1) && args[i + 1].equals("--help")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("delete-alias")) {
 +        String alias = null;
 +        if (args.length >= 2) {
 +          alias = args[++i];
 +        }
 +        command = new AliasDeleteCommand(alias);
 +        if (alias == null || alias.equals("--help")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("create-alias")) {
 +        String alias = null;
 +        if (args.length >= 2) {
 +          alias = args[++i];
 +        }
 +        command = new AliasCreateCommand(alias);
 +        if (alias == null || alias.equals("--help")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("create-cert")) {
 +        command = new CertCreateCommand();
 +        if ((args.length > i + 1) && args[i + 1].equals("--help")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("export-cert")) {
 +        command = new CertExportCommand();
 +        if ((args.length > i + 1) && args[i + 1].equals("--help")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      }else if(args[i].equals("user-auth-test")) {
 +        if(i + 1 >= args.length) {
 +          printKnoxShellUsage();
 +          return -1;
 +        } else {
 +          command = new LDAPAuthCommand();
 +        }
 +      } else if(args[i].equals("system-user-auth-test")) {
 +        if (i + 1 >= args.length){
 +          printKnoxShellUsage();
 +          return -1;
 +        } else {
 +          command = new LDAPSysBindCommand();
 +        }
 +      } else if (args[i].equals("list-alias")) {
 +        command = new AliasListCommand();
 +      } else if (args[i].equals("--value")) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.value = args[++i];
 +        if ( command != null && command instanceof MasterCreateCommand ) {
 +          this.master = this.value;
 +        }
 +      } else if ( args[i].equals("version") ) {
 +        command = new VersionCommand();
 +      } else if ( args[i].equals("redeploy") ) {
 +        command = new RedeployCommand();
 +      } else if ( args[i].equals("validate-topology") ) {
 +        if(i + 1 >= args.length) {
 +          printKnoxShellUsage();
 +          return -1;
 +        } else {
 +          command = new ValidateTopologyCommand();
 +        }
 +      } else if( args[i].equals("list-topologies") ){
 +        command = new ListTopologiesCommand();
 +      }else if ( args[i].equals("--cluster") || args[i].equals("--topology") ) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.cluster = args[++i];
 +      } else if (args[i].equals("service-test")) {
 +        if( i + 1 >= args.length) {
 +          printKnoxShellUsage();
 +          return -1;
 +        } else {
 +          command = new ServiceTestCommand();
 +        }
 +      } else if (args[i].equals("--generate")) {
 +        if ( command != null && command instanceof MasterCreateCommand ) {
 +          this.master = UUID.randomUUID().toString();
 +        } else {
 +          this.generate = "true";
 +        }
 +      } else if(args[i].equals("--type")) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.type = args[++i];
 +      } else if(args[i].equals("--path")) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.path = args[++i];
 +      }else if (args[i].equals("--hostname")) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.hostname = args[++i];
 +      } else if (args[i].equals("--port")) {
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.port = args[++i];
 +      } else if (args[i].equals("--master")) {
 +        // For testing only
 +        if( i+1 >= args.length || args[i+1].startsWith( "-" ) ) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.master = args[++i];
 +      } else if (args[i].equals("--force")) {
 +        this.force = true;
 +      } else if (args[i].equals("--help")) {
 +        printKnoxShellUsage();
 +        return -1;
 +      } else if(args[i].equals("--d")) {
 +        this.debug = true;
 +      } else if(args[i].equals("--u")) {
 +        if(i + 1 < args.length) {
 +          this.user = args[++i];
 +        } else{
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if(args[i].equals("--p")) {
 +        if(i + 1 < args.length) {
 +          this.pass = args[++i];
 +        } else{
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("--g")) {
 +        this.groups = true;
 +      } else if (args[i].equals("list-registry-clients")) {
 +        command = new RemoteRegistryClientsListCommand();
 +      } else if (args[i].equals("--registry-client")) {
 +        if (i + 1 >= args.length || args[i + 1].startsWith("-")) {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +        this.remoteRegistryClient = args[++i];
++      } else if (args[i].equalsIgnoreCase("list-provider-configs")) {
++        command = new RemoteRegistryListProviderConfigsCommand();
++      } else if (args[i].equalsIgnoreCase("list-descriptors")) {
++        command = new RemoteRegistryListDescriptorsCommand();
 +      } else if (args[i].equalsIgnoreCase("upload-provider-config")) {
 +        String fileName;
 +        if (i < (args.length - 1)) {
 +          fileName = args[++i];
 +          command = new RemoteRegistryUploadProviderConfigCommand(fileName);
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("upload-descriptor")) {
 +        String fileName;
 +        if (i < (args.length - 1)) {
 +          fileName = args[++i];
 +          command = new RemoteRegistryUploadDescriptorCommand(fileName);
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("--entry-name")) {
 +        if (i < (args.length - 1)) {
 +          remoteRegistryEntryName = args[++i];
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("delete-descriptor")) {
 +        if (i < (args.length - 1)) {
 +          String entry = args[++i];
 +          command = new RemoteRegistryDeleteDescriptorCommand(entry);
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equals("delete-provider-config")) {
 +        if (i < (args.length - 1)) {
 +          String entry = args[++i];
 +          command = new RemoteRegistryDeleteProviderConfigCommand(entry);
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else if (args[i].equalsIgnoreCase("get-registry-acl")) {
 +        if (i < (args.length - 1)) {
 +          String entry = args[++i];
 +          command = new RemoteRegistryGetACLCommand(entry);
 +        } else {
 +          printKnoxShellUsage();
 +          return -1;
 +        }
 +      } else {
 +        printKnoxShellUsage();
 +        //ToolRunner.printGenericCommandUsage(System.err);
 +        return -1;
 +      }
 +    }
 +    return 0;
 +  }
 +
 +  private void printKnoxShellUsage() {
 +    out.println( USAGE_PREFIX + "\n" + COMMANDS );
 +    if ( command != null ) {
 +      out.println(command.getUsage());
 +    } else {
 +      char[] chars = new char[79];
 +      Arrays.fill( chars, '=' );
 +      String div = new String( chars );
 +
 +      out.println( div );
 +      out.println( VersionCommand.USAGE + "\n\n" + VersionCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( MasterCreateCommand.USAGE + "\n\n" + MasterCreateCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( CertCreateCommand.USAGE + "\n\n" + CertCreateCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( CertExportCommand.USAGE + "\n\n" + CertExportCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( AliasCreateCommand.USAGE + "\n\n" + AliasCreateCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( AliasDeleteCommand.USAGE + "\n\n" + AliasDeleteCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( AliasListCommand.USAGE + "\n\n" + AliasListCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println( RedeployCommand.USAGE + "\n\n" + RedeployCommand.DESC );
 +      out.println();
 +      out.println( div );
 +      out.println(ValidateTopologyCommand.USAGE + "\n\n" + ValidateTopologyCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(ListTopologiesCommand.USAGE + "\n\n" + ListTopologiesCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(LDAPAuthCommand.USAGE + "\n\n" + LDAPAuthCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(LDAPSysBindCommand.USAGE + "\n\n" + LDAPSysBindCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(ServiceTestCommand.USAGE + "\n\n" + ServiceTestCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(RemoteRegistryClientsListCommand.USAGE + "\n\n" + RemoteRegistryClientsListCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(RemoteRegistryGetACLCommand.USAGE + "\n\n" + RemoteRegistryGetACLCommand.DESC);
 +      out.println();
 +      out.println( div );
++      out.println(RemoteRegistryListProviderConfigsCommand.USAGE + "\n\n" + RemoteRegistryListProviderConfigsCommand.DESC);
++      out.println();
++      out.println( div );
++      out.println(RemoteRegistryListDescriptorsCommand.USAGE + "\n\n" + RemoteRegistryListDescriptorsCommand.DESC);
++      out.println();
++      out.println( div );
 +      out.println(RemoteRegistryUploadProviderConfigCommand.USAGE + "\n\n" + RemoteRegistryUploadProviderConfigCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(RemoteRegistryUploadDescriptorCommand.USAGE + "\n\n" + RemoteRegistryUploadDescriptorCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(RemoteRegistryDeleteProviderConfigCommand.USAGE + "\n\n" + RemoteRegistryDeleteProviderConfigCommand.DESC);
 +      out.println();
 +      out.println( div );
 +      out.println(RemoteRegistryDeleteDescriptorCommand.USAGE + "\n\n" + RemoteRegistryDeleteDescriptorCommand.DESC);
 +      out.println();
 +      out.println( div );
 +    }
 +  }
 +
 +  private abstract class Command {
 +
 +    public boolean validate() {
 +      return true;
 +    }
 +
 +    protected Service getService(String serviceName) {
 +      Service service = null;
 +
 +      return service;
 +    }
 +
 +    public abstract void execute() throws Exception;
 +
 +    public abstract String getUsage();
 +
 +    protected AliasService getAliasService() {
 +      AliasService as = services.getService(GatewayServices.ALIAS_SERVICE);
 +      return as;
 +    }
 +
 +    protected KeystoreService getKeystoreService() {
 +      KeystoreService ks = services.getService(GatewayServices.KEYSTORE_SERVICE);
 +      return ks;
 +    }
 +
 +    protected TopologyService getTopologyService()  {
 +      TopologyService ts = services.getService(GatewayServices.TOPOLOGY_SERVICE);
 +      return ts;
 +    }
 +
 +    protected RemoteConfigurationRegistryClientService getRemoteConfigRegistryClientService() {
 +      return services.getService(GatewayServices.REMOTE_REGISTRY_CLIENT_SERVICE);
 +    }
 +
 +  }
 +
 + private class AliasListCommand extends Command {
 +
 +  public static final String USAGE = "list-alias [--cluster clustername]";
 +  public static final String DESC = "The list-alias command lists all of the aliases\n" +
 +                                    "for the given hadoop --cluster. The default\n" +
 +                                    "--cluster is the gateway itself.";
 +
 +   /* (non-Javadoc)
 +    * @see KnoxCLI.Command#execute()
 +    */
 +   @Override
 +   public void execute() throws Exception {
 +     AliasService as = getAliasService();
 +      KeystoreService keystoreService = getKeystoreService();
 +
 +     if (cluster == null) {
 +       cluster = "__gateway";
 +     }
 +      boolean credentialStoreForClusterAvailable =
 +          keystoreService.isCredentialStoreForClusterAvailable(cluster);
 +      if (credentialStoreForClusterAvailable) {
 +        out.println("Listing aliases for: " + cluster);
 +        List<String> aliases = as.getAliasesForCluster(cluster);
 +        for (String alias : aliases) {
 +          out.println(alias);
 +        }
 +        out.println("\n" + aliases.size() + " items.");
 +      } else {
 +        out.println("Invalid cluster name provided: " + cluster);
 +      }
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +    */
 +   @Override
 +   public String getUsage() {
 +     return USAGE + ":\n\n" + DESC;
 +   }
 + }
 +
 + public class CertExportCommand extends Command {
 +
 +   public static final String USAGE = "export-cert";
 +   public static final String DESC = "The export-cert command exports the public certificate\n" +
 +                                     "from the gateway.jks keystore with the alias of gateway-identity.";
 +   private static final String GATEWAY_CREDENTIAL_STORE_NAME = "__gateway";
 +   private static final String GATEWAY_IDENTITY_PASSPHRASE = "gateway-identity-passphrase";
 +
 +    public CertExportCommand() {
 +    }
 +
 +    private GatewayConfig getGatewayConfig() {
 +      GatewayConfig result;
 +      Configuration conf = getConf();
 +      if( conf != null && conf instanceof GatewayConfig ) {
 +        result = (GatewayConfig)conf;
 +      } else {
 +        result = new GatewayConfigImpl();
 +      }
 +      return result;
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +     */
 +    @Override
 +    public void execute() throws Exception {
 +      KeystoreService ks = getKeystoreService();
 +
 +      AliasService as = getAliasService();
 +
 +      if (ks != null) {
 +        try {
 +          if (!ks.isKeystoreForGatewayAvailable()) {
 +            out.println("No keystore has been created for the gateway. Please use the create-cert command or populate with a CA signed cert of your own.");
 +          }
 +          char[] passphrase = as.getPasswordFromAliasForCluster(GATEWAY_CREDENTIAL_STORE_NAME, GATEWAY_IDENTITY_PASSPHRASE);
 +          if (passphrase == null) {
 +            MasterService ms = services.getService("MasterService");
 +            passphrase = ms.getMasterSecret();
 +          }
 +          Certificate cert = ks.getKeystoreForGateway().getCertificate("gateway-identity");
 +          String keyStoreDir = getGatewayConfig().getGatewaySecurityDir() + File.separator + "keystores" + File.separator;
 +          File ksd = new File(keyStoreDir);
 +          if (!ksd.exists()) {
 +            if( !ksd.mkdirs() ) {
 +              // certainly should not happen if the keystore is known to be available
 +              throw new ServiceLifecycleException("Unable to create keystores directory: " + ksd.getAbsolutePath());
 +            }
 +          }
 +          if ("PEM".equals(type) || type == null) {
 +            X509CertificateUtil.writeCertificateToFile(cert, new File(keyStoreDir + "gateway-identity.pem"));
 +            out.println("Certificate gateway-identity has been successfully exported to: " + keyStoreDir + "gateway-identity.pem");
 +          }
 +          else if ("JKS".equals(type)) {
 +            X509CertificateUtil.writeCertificateToJKS(cert, new File(keyStoreDir + "gateway-client-trust.jks"));
 +            out.println("Certificate gateway-identity has been successfully exported to: " + keyStoreDir + "gateway-client-trust.jks");
 +          }
 +          else {
 +            out.println("Invalid type for export file provided. Export has not been done. Please use one of [PEM|JKS]; the default is PEM.");
 +          }
 +        } catch (KeystoreServiceException e) {
 +          throw new ServiceLifecycleException("Keystore was not loaded properly - the provided (or persisted) master secret may not match the password for the keystore.", e);
 +        }
 +      }
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +     */
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
 +
 + public class CertCreateCommand extends Command {
 +
 +  public static final String USAGE = "create-cert [--hostname h]";
 +  public static final String DESC = "The create-cert command creates and populates\n" +
 +                                    "a gateway.jks keystore with a self-signed certificate\n" +
 +                                    "to be used as the gateway identity. It also adds an alias\n" +
 +                                    "to the __gateway-credentials.jceks credential store for the\n" +
 +                                    "key passphrase.";
 +  private static final String GATEWAY_CREDENTIAL_STORE_NAME = "__gateway";
 +  private static final String GATEWAY_IDENTITY_PASSPHRASE = "gateway-identity-passphrase";
 +
 +   public CertCreateCommand() {
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +    */
 +   @Override
 +   public void execute() throws Exception {
 +     KeystoreService ks = getKeystoreService();
 +
 +     AliasService as = getAliasService();
 +
 +     if (ks != null) {
 +       try {
 +         if (!ks.isCredentialStoreForClusterAvailable(GATEWAY_CREDENTIAL_STORE_NAME)) {
 +//           log.creatingCredentialStoreForGateway();
 +           ks.createCredentialStoreForCluster(GATEWAY_CREDENTIAL_STORE_NAME);
 +         }
 +         else {
 +//           log.credentialStoreForGatewayFoundNotCreating();
 +         }
 +         // LET'S NOT GENERATE A DIFFERENT KEY PASSPHRASE BY DEFAULT ANYMORE
 +         // IF A DEPLOYMENT WANTS TO CHANGE THE KEY PASSPHRASE TO MAKE IT MORE SECURE THEN
 +         // THEY CAN ADD THE ALIAS EXPLICITLY WITH THE CLI
 +         //as.generateAliasForCluster(GATEWAY_CREDENTIAL_STORE_NAME, GATEWAY_IDENTITY_PASSPHRASE);
 +       } catch (KeystoreServiceException e) {
 +         throw new ServiceLifecycleException("Keystore was not loaded properly - the provided (or persisted) master secret may not match the password for the keystore.", e);
 +       }
 +
 +       try {
 +         if (!ks.isKeystoreForGatewayAvailable()) {
 +//           log.creatingKeyStoreForGateway();
 +           ks.createKeystoreForGateway();
 +         }
 +         else {
 +//           log.keyStoreForGatewayFoundNotCreating();
 +         }
 +         char[] passphrase = as.getPasswordFromAliasForCluster(GATEWAY_CREDENTIAL_STORE_NAME, GATEWAY_IDENTITY_PASSPHRASE);
 +         if (passphrase == null) {
 +           MasterService ms = services.getService("MasterService");
 +           passphrase = ms.getMasterSecret();
 +         }
 +         ks.addSelfSignedCertForGateway("gateway-identity", passphrase, hostname);
 +//         logAndValidateCertificate();
 +         out.println("Certificate gateway-identity has been successfully created.");
 +       } catch (KeystoreServiceException e) {
 +         throw new ServiceLifecycleException("Keystore was not loaded properly - the provided (or persisted) master secret may not match the password for the keystore.", e);
 +       }
 +     }
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +    */
 +   @Override
 +   public String getUsage() {
 +     return USAGE + ":\n\n" + DESC;
 +   }
 +
 + }
 +
 + public class AliasCreateCommand extends Command {
 +
 +  public static final String USAGE = "create-alias aliasname [--cluster clustername] " +
 +                                     "[ (--value v) | (--generate) ]";
 +  public static final String DESC = "The create-alias command will create an alias\n"
 +                                       + "and secret pair within the credential store for the\n"
 +                                       + "indicated --cluster otherwise within the gateway\n"
 +                                       + "credential store. The actual secret may be specified via\n"
 +                                       + "the --value option or --generate (will create a random secret\n"
 +                                       + "for you), or the user will be prompted to provide a password.";
 +
 +  private String name = null;
 +
 +  /**
 +    * @param alias
 +    */
 +   public AliasCreateCommand(String alias) {
 +     name = alias;
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +    */
 +   @Override
 +   public void execute() throws Exception {
 +     AliasService as = getAliasService();
 +     if (cluster == null) {
 +       cluster = "__gateway";
 +     }
 +     if (value != null) {
 +       as.addAliasForCluster(cluster, name, value);
 +       out.println(name + " has been successfully created.");
 +     }
 +     else {
 +       if ("true".equals(generate)) {
 +         as.generateAliasForCluster(cluster, name);
 +         out.println(name + " has been successfully generated.");
 +       }
 +       else {
 +          value = new String(promptUserForPassword());
 +          as.addAliasForCluster(cluster, name, value);
 +          out.println(name + " has been successfully created.");
 +       }
 +     }
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +    */
 +   @Override
 +   public String getUsage() {
 +     return USAGE + ":\n\n" + DESC;
 +   }
 +
 +    protected char[] promptUserForPassword() {
 +      char[] password = null;
 +      Console c = System.console();
 +      if (c == null) {
 +        System.err
 +            .println("No console to fetch password from user. Consider setting it via --generate or --value.");
 +        System.exit(1);
 +      }
 +
 +      boolean noMatch;
 +      do {
 +        char[] newPassword1 = c.readPassword("Enter password: ");
 +        char[] newPassword2 = c.readPassword("Enter password again: ");
 +        noMatch = !Arrays.equals(newPassword1, newPassword2);
 +        if (noMatch) {
 +          c.format("Passwords don't match. Try again.%n");
 +        } else {
 +          password = Arrays.copyOf(newPassword1, newPassword1.length);
 +        }
 +        Arrays.fill(newPassword1, ' ');
 +        Arrays.fill(newPassword2, ' ');
 +      } while (noMatch);
 +      return password;
 +    }
 +
 + }
 +
 + /**
 +  *
 +  */
 + public class AliasDeleteCommand extends Command {
 +  public static final String USAGE = "delete-alias aliasname [--cluster clustername]";
 +  public static final String DESC = "The delete-alias command removes the\n" +
 +                                    "indicated alias from the --cluster specific\n" +
 +                                    "credential store or the gateway credential store.";
 +
 +  private String name = null;
 +
 +  /**
 +    * @param alias
 +    */
 +   public AliasDeleteCommand(String alias) {
 +     name = alias;
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +    */
 +   @Override
 +   public void execute() throws Exception {
 +     AliasService as = getAliasService();
 +      KeystoreService keystoreService = getKeystoreService();
 +     if (as != null) {
 +       if (cluster == null) {
 +         cluster = "__gateway";
 +       }
 +        boolean credentialStoreForClusterAvailable =
 +            keystoreService.isCredentialStoreForClusterAvailable(cluster);
 +        if (credentialStoreForClusterAvailable) {
 +          List<String> aliasesForCluster = as.getAliasesForCluster(cluster);
 +          if (null == aliasesForCluster || !aliasesForCluster.contains(name)) {
 +            out.println("Deletion of Alias: " + name + " from cluster: " + cluster + " Failed. "
 +                + "\n" + "No such alias exists in the cluster.");
 +          } else {
 +            as.removeAliasForCluster(cluster, name);
 +            out.println(name + " has been successfully deleted.");
 +          }
 +        } else {
 +          out.println("Invalid cluster name provided: " + cluster);
 +        }
 +     }
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +    */
 +   @Override
 +   public String getUsage() {
 +     return USAGE + ":\n\n" + DESC;
 +   }
 +
 + }
 +
 + /**
 +  *
 +  */
 + public class MasterCreateCommand extends Command {
 +  public static final String USAGE = "create-master [--force]";
 +  public static final String DESC = "The create-master command persists the\n" +
 +                                    "master secret in a file located at:\n" +
 +                                    "{GATEWAY_HOME}/data/security/master. It\n" +
 +                                    "will prompt the user for the secret to persist.\n" +
 +                                    "Use --force to overwrite the master secret.";
 +
 +   public MasterCreateCommand() {
 +   }
 +
 +   private GatewayConfig getGatewayConfig() {
 +     GatewayConfig result;
 +     Configuration conf = getConf();
 +     if( conf != null && conf instanceof GatewayConfig ) {
 +       result = (GatewayConfig)conf;
 +     } else {
 +       result = new GatewayConfigImpl();
 +     }
 +     return result;
 +   }
 +
 +   public boolean validate() {
 +     boolean valid = true;
 +     GatewayConfig config = getGatewayConfig();
 +     File dir = new File( config.getGatewaySecurityDir() );
 +     File file = new File( dir, "master" );
 +     if( file.exists() ) {
 +       if( force ) {
 +         if( !file.canWrite() ) {
 +           out.println(
 +               "This command requires write permissions on the master secret file: " +
 +                   file.getAbsolutePath() );
 +           valid = false;
 +         } else {
 +           valid = file.delete();
 +           if( !valid ) {
 +             out.println(
 +                 "Unable to delete the master secret file: " +
 +                     file.getAbsolutePath() );
 +           }
 +         }
 +       } else {
 +         out.println(
 +             "Master secret is already present on disk. " +
 +                 "Please be aware that overwriting it will require updating other security artifacts. " +
 +                 "Use --force to overwrite the existing master secret." );
 +         valid = false;
 +       }
 +     } else if( dir.exists() && !dir.canWrite() ) {
 +       out.println(
 +           "This command requires write permissions on the security directory: " +
 +               dir.getAbsolutePath() );
 +       valid = false;
 +     }
 +     return valid;
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +    */
 +   @Override
 +   public void execute() throws Exception {
 +     out.println("Master secret has been persisted to disk.");
 +   }
 +
 +   /* (non-Javadoc)
 +    * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +    */
 +   @Override
 +   public String getUsage() {
 +     return USAGE + ":\n\n" + DESC;
 +   }
 + }
 +
 +  private class VersionCommand extends Command {
 +
 +    public static final String USAGE = "version";
 +    public static final String DESC = "Displays Knox version information.";
 +
 +    @Override
 +    public void execute() throws Exception {
 +      Properties buildProperties = loadBuildProperties();
 +      System.out.println(
 +          String.format(
 +              "Apache Knox: %s (%s)",
 +              buildProperties.getProperty( "build.version", "unknown" ),
 +              buildProperties.getProperty( "build.hash", "unknown" ) ) );
 +    }
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +  }
 +
 +  private class RedeployCommand extends Command {
 +
 +    public static final String USAGE = "redeploy [--cluster clustername]";
 +    public static final String DESC =
 +        "Redeploys one or all of the gateway's clusters (a.k.a topologies).";
 +
 +    @Override
 +    public void execute() throws Exception {
 +      TopologyService ts = getTopologyService();
 +      ts.reloadTopologies();
 +      if (cluster != null) {
 +        if (validateClusterName(cluster, ts)) {
 +          ts.redeployTopologies(cluster);
 +        }
 +        else {
 +          out.println("Invalid cluster name provided. Nothing to redeploy.");
 +        }
 +      }
 +    }
 +
 +    /**
 +     * @param cluster the name of the cluster to look up
 +     * @param ts the topology service used to enumerate the known topologies
 +     * @return true if a topology with the given name exists
 +     */
 +    private boolean validateClusterName(String cluster, TopologyService ts) {
 +      boolean valid = false;
 +      for (Topology t : ts.getTopologies() ) {
 +        if (t.getName().equals(cluster)) {
 +          valid = true;
 +          break;
 +        }
 +      }
 +      return valid;
 +    }
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +  }
 +
 +  private class ValidateTopologyCommand extends Command {
 +
 +    public static final String USAGE = "validate-topology [--cluster clustername] | [--path \"path/to/file\"]";
 +    public static final String DESC = "Ensures that a cluster's description (a.k.a topology) \n" +
 +        "follows the correct formatting rules.\n" +
 +        "use the list-topologies command to get a list of available cluster names";
 +    private String file = "";
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +    public void execute() throws Exception {
 +      GatewayConfig gc = getGatewayConfig();
 +      String topDir = gc.getGatewayTopologyDir();
 +
 +      if(path != null) {
 +        file = path;
 +      } else if(cluster == null) {
 +        // The following block of code retrieves the list of files in the topologies directory
 +        File tops = new File(topDir + "/topologies");
 +        if(tops.isDirectory()) {
 +          out.println("List of files available in the topologies directory");
 +          for (File f : tops.listFiles()) {
 +            if(f.getName().endsWith(".xml")) {
 +              String fName = f.getName().replace(".xml", "");
 +              out.println(fName);
 +            }
 +          }
 +          return;
 +        } else {
 +          out.println("Could not locate topologies directory");
 +          return;
 +        }
 +
 +      } else {
 +        file = topDir + "/" + cluster + ".xml";
 +      }
 +
 +      // The following block checks a topology against the XSD
 +      out.println();
 +      out.println("File to be validated: ");
 +      out.println(file);
 +      out.println("==========================================");
 +
 +      if(new File(file).exists()) {
 +        TopologyValidator tv = new TopologyValidator(file);
 +
 +        if(tv.validateTopology()) {
 +          out.println("Topology file validated successfully");
 +        } else {
 +          out.println(tv.getErrorString()) ;
 +          out.println("Topology validation unsuccessful");
 +        }
 +      } else {
 +        out.println("The topology file specified does not exist.");
 +      }
 +    }
 +
 +  }
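
Typical invocations of this command (the cluster name and file path are illustrative):

    bin/knoxcli.sh validate-topology --cluster sandbox
    bin/knoxcli.sh validate-topology --path conf/topologies/sandbox.xml
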
 +
 +  private class ListTopologiesCommand extends Command {
 +
 +    public static final String USAGE = "list-topologies";
 +    public static final String DESC = "Retrieves a list of the available topologies within the\n" +
 +        "default topologies directory. Will return topologies that may not be deployed due\n" +
 +        "errors in file formatting.";
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +    @Override
 +    public void execute() {
 +
 +      String confDir = getGatewayConfig().getGatewayConfDir();
 +      File tops = new File(confDir + "/topologies");
 +      out.println("List of files available in the topologies directory");
 +      out.println(tops.toString());
 +      if(tops.isDirectory()) {
 +        for (File f : tops.listFiles()) {
 +          if(f.getName().endsWith(".xml")) {
 +            String fName = f.getName().replace(".xml", "");
 +            out.println(fName);
 +          }
 +        }
 +      } else {
 +        out.println("ERR: Topologies directory does not exist.");
 +      }
 +
 +    }
 +
 +  }
 +
 +  private class LDAPCommand extends Command {
 +
 +    public static final String USAGE = "ldap-command";
 +    public static final String DESC = "This is an internal command. It should not be used.";
 +    protected String username = null;
 +    protected char[] password = null;
 +    protected static final String debugMessage = "For more information use --d for debug output.";
 +    protected Topology topology;
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +    @Override
 +    public void execute() {
 +      out.println("This command does not have any functionality.");
 +    }
 +
 +
 +//    First define a few Exceptions
 +    protected class NoSuchTopologyException extends Exception {
 +      public NoSuchTopologyException() {}
 +      public NoSuchTopologyException(String message) { super(message); }
 +    }
 +    protected class MissingPasswordException extends Exception {
 +      public MissingPasswordException() {}
 +      public MissingPasswordException(String message) { super(message); }
 +    }
 +
 +    protected class MissingUsernameException extends Exception {
 +      public MissingUsernameException() {}
 +      public MissingUsernameException(String message) { super(message); }
 +    }
 +
 +    protected class BadSubjectException extends Exception {
 +      public BadSubjectException() {}
 +      public BadSubjectException(String message) { super(message); }
 +    }
 +
 +    protected class NoSuchProviderException extends Exception {
 +      public NoSuchProviderException() {}
 +      public NoSuchProviderException(String name, String role, String topology) {
 +        super("Could not find provider with role: " + role + ", name: " + name + " inside of topology: " + topology);
 +      }
 +    }
 +
 +    // Returns true if any validation errors were found (and printed).
 +    protected boolean hasShiroProviderErrors(Topology topology, boolean groupLookup) {
 +//      First let's define the variables that represent the ShiroProvider params
 +      String mainLdapRealm = "main.ldapRealm";
 +      String contextFactory = mainLdapRealm + ".contextFactory";
 +      String groupContextFactory = "main.ldapGroupContextFactory";
 +      String authorizationEnabled = mainLdapRealm + ".authorizationEnabled";
 +      String userSearchAttributeName = mainLdapRealm + ".userSearchAttributeName";
 +      String userObjectClass = mainLdapRealm + ".userObjectClass";
 +      String authenticationMechanism = mainLdapRealm + ".authenticationMechanism"; // Should not be used up to v0.6.0
 +      String searchBase = mainLdapRealm + ".searchBase";
 +      String groupSearchBase = mainLdapRealm + ".groupSearchBase";
 +      String userSearchBase = mainLdapRealm + ".userSearchBase";
 +      String groupObjectClass = mainLdapRealm + ".groupObjectClass";
 +      String memberAttribute = mainLdapRealm + ".memberAttribute";
 +      String memberAttributeValueTemplate = mainLdapRealm + ".memberAttributeValueTemplate";
 +      String systemUsername = contextFactory + ".systemUsername";
 +      String systemPassword = contextFactory + ".systemPassword";
 +      String url = contextFactory + ".url";
 +      String userDnTemplate = mainLdapRealm + ".userDnTemplate";
 +
 +
 +      Provider shiro = topology.getProvider("authentication", "ShiroProvider");
 +      if(shiro != null) {
 +        Map<String, String> params = shiro.getParams();
 +        int errs = 0;
 +        if(groupLookup) {
 +          int errors = 0;
 +          errors += hasParam(params, groupContextFactory, true) ? 0 : 1;
 +          errors += hasParam(params, groupObjectClass, true) ? 0 : 1;
 +          errors += hasParam(params, memberAttributeValueTemplate, true) ? 0 : 1;
 +          errors += hasParam(params, memberAttribute, true) ? 0 : 1;
 +          errors += hasParam(params, authorizationEnabled, true) ? 0 : 1;
 +          errors += hasParam(params, systemUsername, true) ? 0 : 1;
 +          errors += hasParam(params, systemPassword, true) ? 0 : 1;
 +          errors += hasParam(params, userSearchBase, true) ? 0 : 1;
 +          errors += hasParam(params, groupSearchBase, true) ? 0 : 1;
 +          errs += errors;
 +
 +        } else {
 +
 +//        Realm + Url is always required.
 +          errs += hasParam(params, mainLdapRealm, true) ? 0 : 1;
 +          errs += hasParam(params, url, true) ? 0 : 1;
 +
 +          if(hasParam(params, authorizationEnabled, false)) {
 +            int errors = 0;
 +            int searchBaseErrors = 0;
 +            errors += hasParam(params, systemUsername, true) ? 0 : 1;
 +            errors += hasParam(params, systemPassword, true) ? 0 : 1;
 +            searchBaseErrors += hasParam(params, searchBase, false) ? 0 : hasParam(params, userSearchBase, false) ? 0 : 1;
 +            if (searchBaseErrors > 0) {
 +              out.println("Warn: Both " + searchBase + " and " + userSearchBase + " are missing from the topology");
 +            }
 +            errors += searchBaseErrors;
 +            errs += errors;
 +          }
 +
 +//        If any one of these is present they must all be present
 +          if( hasParam(params, userSearchAttributeName, false) ||
 +              hasParam(params, userObjectClass, false) ||
 +              hasParam(params, searchBase, false) ||
 +              hasParam(params, userSearchBase, false)) {
 +
 +            int errors = 0;
 +            errors += hasParam(params, userSearchAttributeName, true) ? 0 : 1;
 +            errors += hasParam(params, userObjectClass, true) ? 0 : 1;
 +            errors += hasParam(params, searchBase, false) ? 0 : hasParam(params, userSearchBase, false) ? 0 : 1;
 +            errors += hasParam(params, systemUsername, true) ? 0 : 1;
 +            errors += hasParam(params, systemPassword, true) ? 0 : 1;
 +
 +            if(errors > 0) {
 +              out.println(userSearchAttributeName + " or " + userObjectClass + " or " + searchBase + " or " + userSearchBase + " was found in the topology");
 +              out.println("If any one of the above params is present then " + userSearchAttributeName + 
 +                  " and " + userObjectClass + " must both be present and either " + searchBase + " or " + userSearchBase + " must also be present.");
 +            }
 +            errs += errors;
 +          } else {
 +            errs += hasParam(params, userDnTemplate, true) ?  0 : 1;
 +
 +          }
 +        }
 +        return (errs > 0);
 +      } else {
 +        out.println("Could not obtain ShiroProvider");
 +        return true;
 +      }
 +    }
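
To make the validation rules above concrete, here is a minimal sketch of a param map that would pass the non-group-lookup checks, since the realm and contextFactory URL are always required and userDnTemplate suffices when no user-search params are set. The keys are the ones defined at the top of the method; the values are illustrative assumptions, not taken from the source:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical example: the values below are illustrative.
    Map<String, String> params = new HashMap<>();
    params.put("main.ldapRealm", "org.apache.knox.gateway.shirorealm.KnoxLdapRealm");
    params.put("main.ldapRealm.contextFactory.url", "ldap://localhost:33389");
    params.put("main.ldapRealm.userDnTemplate", "uid={0},ou=people,dc=hadoop,dc=apache,dc=org");
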
 +
 +    // Checks to see if the param name is present. If not, notify the user
 +    protected boolean hasParam(Map<String, String> params, String key, boolean notifyUser){
 +      if(params.get(key) == null){
 +        if(notifyUser) { out.println("Warn: " + key + " is not present in topology"); }
 +        return false;
 +      } else { return true; }
 +    }
 +
 +    /**
 +     *
 +     * @param ini - the path to the shiro.ini file within a topology deployment.
 +     * @param token - token for username and password
 +     * @return - true/false whether a user was successfully able to authenticate or not.
 +     */
 +    protected boolean authenticateUser(Ini ini, UsernamePasswordToken token){
 +      boolean result = false;
 +      try {
 +        Subject subject = getSubject(ini);
 +        try{
 +          subject.login(token);
 +          if(subject.isAuthenticated()){
 +            result = true;
 +          }
 +        } catch (AuthenticationException e){
 +          out.println(e.toString());
 +          out.println(e.getCause().getMessage());
 +          if (debug) {
 +            e.printStackTrace(out);
 +          } else {
 +            out.println(debugMessage);
 +          }
 +        } finally {
 +          subject.logout();
 +        }
 +      } catch (BadSubjectException e) {
 +        out.println(e.toString());
 +        if (debug){
 +          e.printStackTrace();
 +        } else {
 +          out.println(debugMessage);
 +        }
 +      } catch (ConfigurationException e) {
 +        out.println(e.toString());
 +      } catch ( Exception e ) {
 +        out.println(e.getCause());
 +        out.println(e.toString());
 +      }
 +      return result;
 +    }
 +
 +    protected boolean authenticateUser(String config, UsernamePasswordToken token) throws ConfigurationException {
 +      Ini ini = new Ini();
 +      ini.loadFromPath(config);
 +      return authenticateUser(ini, token);
 +    }
 +
 +    /**
 +     *
 +     * @param userDn - fully qualified userDn used for LDAP authentication
 +     * @return - returns the principal found in the userDn after "uid="
 +     */
 +    protected String getPrincipal(String userDn){
 +      String result = "";
 +
 +//      Need to determine whether we are using AD or LDAP.
 +//      LDAP userDn usually starts with "uid="
 +//      AD userDn usually starts with cn/CN
 +//      Find the userDN template
 +
 +      try {
 +        Topology t = getTopology(cluster);
 +        Provider shiro = t.getProvider("authentication", "ShiroProvider");
 +
 +        String p1 = shiro.getParams().get("main.ldapRealm.userDnTemplate");
 +
 +//        We know everything between first "=" and "," will be part of the principal.
 +        int eq = userDn.indexOf("=");
 +        int com = userDn.indexOf(",");
 +        if(eq != -1 && com > eq && com != -1) {
 +          result = userDn.substring(eq + 1, com);
 +        } else {
 +          result = "";
 +        }
 +      } catch (NoSuchTopologyException e) {
 +        out.println(e.toString());
 +        result = userDn;
 +      }
 +      return result;
 +    }
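
For example (an assumed input), getPrincipal("uid=guest,ou=people,dc=hadoop,dc=apache,dc=org") returns "guest", the substring between the first '=' and the first ','; if no such pair exists in the userDn, an empty string is returned.
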
 +
 +    /**
 +     *
 +     * @param t - topology configuration to use
 +     * @param config - the path to the shiro.ini file from the topology deployment.
 +     * @return - true/false whether LDAP successfully authenticated with system credentials.
 +     */
 +    protected boolean testSysBind(Topology t, String config) {
 +      boolean result = false;
 +      String username;
 +      char[] password;
 +
 +      try {
 +//        Pull out contextFactory.url param for light shiro config
 +        Provider shiro = t.getProvider("authentication", "ShiroProvider");
 +        Map<String, String> params = shiro.getParams();
 +        String url = params.get("main.ldapRealm.contextFactory.url");
 +
 +//        Build the Ini with minimum requirements
 +        Ini ini = new Ini();
 +        ini.addSection("main");
 +        ini.setSectionProperty("main", "ldapRealm", "org.apache.knox.gateway.shirorealm.KnoxLdapRealm");
 +        ini.setSectionProperty("main", "ldapContextFactory", "org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory");
 +        ini.setSectionProperty("main", "ldapRealm.contextFactory.url", url);
 +
 +        username = getSystemUsername(t);
 +        password = getSystemPassword(t);
 +        result = authenticateUser(ini, new UsernamePasswordToken(username, password));
 +      } catch (MissingUsernameException | NoSuchProviderException | MissingPasswordException e) {
 +        out.println(e.toString());
 +      } catch (NullPointerException e) {
 +        out.println(e.toString());
 +      }
 +      return result;
 +    }
 +
 +    /**
 +     *
 +     * @param t - topology configuration to use
 +     * @return - the systemUsername (DN) specified in the topology; null if not configured
 +     */
 +    private String getSystemUsername(Topology t) throws MissingUsernameException, NoSuchProviderException {
 +      final String SYSTEM_USERNAME = "main.ldapRealm.contextFactory.systemUsername";
 +      String user = null;
 +      Provider shiroProvider = t.getProvider("authentication", "ShiroProvider");
 +      if(shiroProvider != null){
 +        Map<String, String> params = shiroProvider.getParams();
 +        user = params.get(SYSTEM_USERNAME);
 +      } else {
 +        throw new NoSuchProviderException("ShiroProvider", "authentication", t.getName());
 +      }
 +      return user;
 +    }
 +
 +    /**
 +     *
 +     * @param t - topology configuration to use
 +     * @return - the systemPassword specified in the topology; a MissingPasswordException is thrown if it is absent
 +     */
 +    private char[] getSystemPassword(Topology t) throws NoSuchProviderException, MissingPasswordException{
 +      final String SYSTEM_PASSWORD = "main.ldapRealm.contextFactory.systemPassword";
 +      String pass = null;
 +      Provider shiro = t.getProvider("authentication", "ShiroProvider");
 +      if(shiro != null){
 +        Map<String, String> params = shiro.getParams();
 +        pass = params.get(SYSTEM_PASSWORD);
 +      } else {
 +        throw new NoSuchProviderException("ShiroProvider", "authentication", t.getName());
 +      }
 +
 +      if(pass != null) {
 +        return pass.toCharArray();
 +      } else {
 +        throw new MissingPasswordException("ShiroProvider did not contain param: " + SYSTEM_PASSWORD);
 +      }
 +    }
 +
 +    /**
 +     *
 +     * @param config - the shiro.ini config file created in topology deployment.
 +     * @return returns the Subject given by the shiro config's settings.
 +     */
 +    protected Subject getSubject(Ini config) throws BadSubjectException {
 +      try {
 +        ThreadContext.unbindSubject();
 +        Factory factory = new IniSecurityManagerFactory(config);
 +        org.apache.shiro.mgt.SecurityManager securityManager = (org.apache.shiro.mgt.SecurityManager) factory.getInstance();
 +        SecurityUtils.setSecurityManager(securityManager);
 +        Subject subject = SecurityUtils.getSubject();
 +        if( subject != null) {
 +          return subject;
 +        } else {
 +          out.println("Error Creating Subject from config at: " + config);
 +        }
 +      } catch (Exception e){
 +        out.println(e.toString());
 +      }
 +      throw new BadSubjectException("Subject could not be created with Shiro Config at " + config);
 +    }
 +
 +    protected Subject getSubject(String config) throws ConfigurationException {
 +      Ini ini = new Ini();
 +      ini.loadFromPath(config);
 +      try {
 +        return getSubject(ini);
 +      } catch (BadSubjectException e) {
 +        throw new ConfigurationException("Could not get Subject with Ini at " + config);
 +      }
 +    }
 +
 +    /**
 +     * prompts the user for credentials in the command line if necessary
 +     * populates the username and password members.
 +     */
 +    protected void promptCredentials() {
 +      if(this.username == null){
 +        Console c = System.console();
 +        if( c != null) {
 +          this.username = c.readLine("Username: ");
 +        }else{
 +          try {
 +            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
 +            out.println("Username: ");
 +            this.username = reader.readLine();
 +            // Intentionally not closing the reader: it wraps System.in, and closing it here would break the later password prompt.
 +          } catch (IOException e){
 +            out.println(e.toString());
 +            this.username = "";
 +          }
 +        }
 +      }
 +
 +      if(this.password == null){
 +        Console c = System.console();
 +        if( c != null) {
 +          this.password = c.readPassword("Password: ");
 +        }else{
 +          try {
 +            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
 +            out.println("Password: ");
 +            String pw = reader.readLine();
 +            if(pw != null){
 +              this.password = pw.toCharArray();
 +            } else {
 +              this.password = new char[0];
 +            }
 +          } catch (IOException e){
 +            out.println(e.toString());
 +            this.password = new char[0];
 +          }
 +        }
 +      }
 +    }
 +
 +    /**
 +     *
 +     * @param topologyName - the name of the topology to retrieve
 +     * @return - Topology object with specified name. null if topology doesn't exist in TopologyService
 +     */
 +    protected Topology getTopology(String topologyName) throws NoSuchTopologyException {
 +      TopologyService ts = getTopologyService();
 +      ts.reloadTopologies();
 +      for (Topology t : ts.getTopologies()) {
 +        if(t.getName().equals(topologyName)) {
 +          return t;
 +        }
 +      }
 +      throw new NoSuchTopologyException("Topology " + topologyName + " does not" +
 +          " exist in the topologies directory.");
 +    }
 +
 +    /**
 +     *
 +     * @param t - Topology to use for config
 +     * @return - path of shiro.ini config file.
 +     */
 +    protected String getConfig(Topology t){
 +      File tmpDir = new File(System.getProperty("java.io.tmpdir"));
 +      DeploymentFactory.setGatewayServices(services);
 +      EnterpriseArchive archive = DeploymentFactory.createDeployment(getGatewayConfig(), t);
 +      File war = archive.as(ExplodedExporter.class).exportExploded(tmpDir, t.getName() + "_deploy.tmp");
 +      war.deleteOnExit();
 +      String config = war.getAbsolutePath() + "/%2F/WEB-INF/shiro.ini";
 +      try{
 +        FileUtils.forceDeleteOnExit(war);
 +      } catch (IOException e) {
 +        out.println(e.toString());
 +        war.deleteOnExit();
 +      }
 +      return config;
 +    }
 +
 +    /**
 +     * populates username and password if they were passed as arguments, if not will prompt user for them.
 +     */
 +    void acquireCredentials(){
 +      if(user != null){
 +        this.username = user;
 +      }
 +      if(pass != null){
 +        this.password = pass.toCharArray();
 +      }
 +      promptCredentials();
 +    }
 +
 +    /**
 +     *
 +     * @return - true or false if the topology was acquired from the topology service and populated in the topology
 +     * field.
 +     */
 +    protected boolean acquireTopology(){
 +      try {
 +        topology = getTopology(cluster);
 +      } catch (NoSuchTopologyException e) {
 +        out.println(e.toString());
 +        return false;
 +      }
 +      return true;
 +    }
 +  }
 +
 +  private class LDAPAuthCommand extends LDAPCommand {
 +
 +    public static final String USAGE = "user-auth-test [--cluster clustername] [--u username] [--p password] [--g]";
 +    public static final String DESC = "This command tests a cluster's configuration ability to\n " +
 +        "authenticate a user with a cluster's ShiroProvider settings.\n Use \"--g\" if you want to list the groups a" +
 +        " user is a member of. \nOptional: [--u username]: Provide a username argument to the command\n" +
 +        "Optional: [--p password]: Provide a password argument to the command.\n" +
 +        "If a username and password argument are not supplied, the terminal will prompt you for one.";
 +
 +    private static final String SUBJECT_USER_GROUPS = "subject.userGroups";
 +    private HashSet<String> groupSet = new HashSet<>();
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +    @Override
 +    public void execute() {
 +      if(!acquireTopology()){
 +        return;
 +      }
 +      acquireCredentials();
 +
 +      if(topology.getProvider("authentication", "ShiroProvider") == null) {
 +        out.println("ERR: This tool currently only works with Shiro as the authentication provider.");
 +        out.println("Please update the topology to use \"ShiroProvider\" as the authentication provider.");
 +        return;
 +      }
 +
 +      String config = getConfig(topology);
 +
 +      if(new File(config).exists()) {
 +          if(authenticateUser(config, new UsernamePasswordToken(username, password))) {
 +            out.println("LDAP authentication successful!");
 +            if(groups) {
 +              if(testSysBind(topology, config)) {
 +                groupSet = getGroups(topology, new UsernamePasswordToken(username, password));
 +                if(groupSet == null || groupSet.isEmpty()) {
 +                  out.println(username + " does not belong to any groups");
 +                  hasShiroProviderErrors(topology, true);
 +                  out.println("You were looking for this user's groups but this user does not belong to any.");
 +                  out.println("Your topology file may be incorrectly configured for group lookup.");
 +                } else {
 +                  for (String group : groupSet) {
 +                    out.println(username + " is a member of: " + group);
 +                  }
 +                }
 +              }
 +            }
 +          } else {
 +            out.println("ERR: Unable to authenticate user: " + username);
 +          }
 +      } else {
 +        out.println("ERR: No shiro config file found.");
 +      }
 +    }
 +
 +    private HashSet<String> getGroups(Topology t, UsernamePasswordToken token){
 +      HashSet<String> groups = null;
 +      try {
 +        Subject subject = getSubject(getConfig(t));
 +        if(!subject.isAuthenticated()) {
 +          subject.login(token);
 +        }
 +        subject.hasRole(""); //Populate subject groups
 +        groups = (HashSet) subject.getSession().getAttribute(SUBJECT_USER_GROUPS);
 +        subject.logout();
 +      } catch (AuthenticationException e) {
 +        out.println("Error retrieving groups");
 +        out.println(e.toString());
 +        if(debug) {
 +          e.printStackTrace();
 +        } else {
 +          out.println(debugMessage);
 +        }
 +      } catch (ConfigurationException e) {
 +        out.println(e.toString());
 +        if(debug){
 +          e.printStackTrace();
 +        }
 +      }
 +      return groups;
 +    }
 +
 +  }
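
A typical invocation (the cluster name and credentials are illustrative):

    bin/knoxcli.sh user-auth-test --cluster sandbox --u guest --p guest-password --g
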
 +
 +  public class LDAPSysBindCommand extends LDAPCommand {
 +
 +    public static final String USAGE = "system-user-auth-test [--cluster clustername] [--d]";
 +    public static final String DESC = "This command tests a cluster configuration's ability to\n " +
 +        "authenticate a user with a cluster's ShiroProvider settings.";
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +
 +    @Override
 +    public void execute() {
 +
 +      if(!acquireTopology()) {
 +        return;
 +      }
 +
 +      if(hasShiroProviderErrors(topology, false)) {
 +        out.println("Topology warnings present. SystemUser may not bind.");
 +      }
 +
 +      if(testSysBind(topology, getConfig(topology))) {
 +        out.println("System LDAP Bind successful.");
 +      } else {
 +        out.println("Unable to successfully bind to LDAP server with topology credentials. Are your parameters correct?");
 +      }
 +    }
 +  }
 +
 +  private GatewayConfig getGatewayConfig() {
 +    GatewayConfig result;
 +    Configuration conf = getConf();
 +    if(conf instanceof GatewayConfig) {
 +      result = (GatewayConfig) conf;
 +    } else {
 +      result = new GatewayConfigImpl();
 +    }
 +    return result;
 +  }
 +
 +  public class ServiceTestCommand extends Command {
 +    public static final String USAGE = "service-test [--u username] [--p password] [--cluster clustername] [--hostname name] " +
 +        "[--port port]";
 +    public static final String DESC =
 +                        "This command requires a running instance of Knox to be present on the same machine.\n" +
 +                        "It will execute a test to make sure all services are accessible through the gateway URLs.\n" +
 +                        "Errors are reported and suggestions to resolve any problems are returned. JSON formatted.\n";
 +
 +    private boolean ssl = true;
 +    private int attempts = 0;
 +
 +    @Override
 +    public String getUsage() { return USAGE + ":\n\n" + DESC; }
 +
 +    @Override
 +    public void execute() {
 +      attempts++;
 +      SSLContext ctx = null;
 +      CloseableHttpClient client;
 +      String http = "http://";
 +      String https = "https://";
 +      GatewayConfig conf = getGatewayConfig();
 +      String gatewayPort;
 +      String host;
 +
 +
 +      if(cluster == null) {
 +        printKnoxShellUsage();
 +        out.println("A --cluster argument is required.");
 +        return;
 +      }
 +
 +      if(hostname != null) {
 +        host = hostname;
 +      } else {
 +        try {
 +          host = InetAddress.getLocalHost().getHostAddress();
 +        } catch (UnknownHostException e) {
 +          out.println(e.toString());
 +          out.println("Defaulting address to localhost. Use --hostname option to specify a different hostname");
 +          host = "localhost";
 +        }
 +      }
 +
 +      if (port != null) {
 +        gatewayPort = port;
 +      } else if (conf.getGatewayPort() > -1) {
 +        gatewayPort = Integer.toString(conf.getGatewayPort());
 +      } else {
 +        out.println("Could not get port. Please supply it using the --port option");
 +        return;
 +      }
 +
 +
 +      String path = "/" + conf.getGatewayPath();
 +      String topology = "/" + cluster;
 +      String httpServiceTestURL = http + host + ":" + gatewayPort + path + topology + "/service-test";
 +      String httpsServiceTestURL = https + host + ":" + gatewayPort + path + topology + "/service-test";
 +
 +      String authString = "";
 +//    Create Authorization String
 +      if( user != null && pass != null) {
 +        authString = "Basic " + Base64.encodeBase64String((user + ":" + pass).getBytes());
 +      } else {
 +        out.println("Username and/or password not supplied. Expect HTTP 401 Unauthorized responses.");
 +      }
 +
 +//    Attempt to build SSL context for HTTP client.
 +      try {
 +        ctx = SSLContexts.custom().loadTrustMaterial(null, new TrustSelfSignedStrategy()).build();
 +      } catch (Exception e) {
 +        out.println(e.toString());
 +      }
 +
 +//    Initialize the HTTP client
 +      if(ctx == null) {
 +        client = HttpClients.createDefault();
 +      } else {
 +        client = HttpClients.custom().setSslcontext(ctx).build();
 +      }
 +
 +      HttpGet request;
 +      if(ssl) {
 +        request = new HttpGet(httpsServiceTestURL);
 +      } else {
 +        request = new HttpGet(httpServiceTestURL);
 +      }
 +
 +
 +      request.setHeader("Authorization", authString);
 +      request.setHeader("Accept", MediaType.APPLICATION_JSON.getMediaType());
 +      try {
 +        out.println(request.toString());
 +        CloseableHttpResponse response = client.execute(request);
 +
 +        switch (response.getStatusLine().getStatusCode()) {
 +
 +          case 200:
 +            response.getEntity().writeTo(out);
 +            break;
 +          case 404:
 +            out.println("Could not find service-test resource");
 +            out.println("Make sure you have configured the SERVICE-TEST service in your topology.");
 +            break;
 +          case 500:
 +            out.println("HTTP 500 Server error");
 +            break;
 +
 +          default:
 +            out.println("Unexpected HTTP response code.");
 +            out.println(response.getStatusLine().toString());
 +            response.getEntity().writeTo(out);
 +            break;
 +        }
 +
 +        response.close();
 +        request.releaseConnection();
 +
 +      } catch (ClientProtocolException e) {
 +        out.println(e.toString());
 +        if (debug) {
 +          e.printStackTrace(out);
 +        }
 +      } catch (SSLException e) {
 +        out.println(e.toString());
 +        retryRequest();
 +      } catch (IOException e) {
 +        out.println(e.toString());
 +        retryRequest();
 +        if(debug) {
 +          e.printStackTrace(out);
 +        }
 +      } finally {
 +        try {
 +          client.close();
 +        } catch (IOException e) {
 +          out.println(e.toString());
 +        }
 +      }
 +
 +    }
 +
 +    public void retryRequest(){
 +      if(attempts < 2) {
 +        if(ssl) {
 +          ssl = false;
 +          out.println("Attempting request without SSL.");
 +        } else {
 +          ssl = true;
 +          out.println("Attempting request with SSL ");
 +        }
 +        execute();
 +      } else {
 +        out.println("Unable to successfully make request. Try using the API with cURL.");
 +      }
 +    }
 +
 +  }
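
For reference, the service-test URL assembled above takes the form scheme://host:port/<gateway-path>/<cluster>/service-test; with illustrative values and the conventional gateway path, that would be:

    https://localhost:8443/gateway/sandbox/service-test
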
 +
 +  public class RemoteRegistryClientsListCommand extends Command {
 +
 +    static final String USAGE = "list-registry-clients";
 +    static final String DESC = "Lists all of the remote configuration registry clients defined in gateway-site.xml.\n";
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +     */
 +    @Override
 +    public void execute() throws Exception {
 +      GatewayConfig config = getGatewayConfig();
 +      List<String> remoteConfigRegistryClientNames = config.getRemoteRegistryConfigurationNames();
 +      if (!remoteConfigRegistryClientNames.isEmpty()) {
 +        out.println("Listing remote configuration registry clients:");
 +        for (String name : remoteConfigRegistryClientNames) {
 +          out.println(name);
 +        }
 +      }
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +     */
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 + }
 +
++  private abstract class RemoteRegistryCommand extends Command {
++    static final String ROOT_ENTRY = "/knox";
++    static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
++    static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
++    static final String DESCRIPTORS_ENTRY = CONFIG_ENTRY + "/descriptors";
++
++    protected RemoteConfigurationRegistryClient getClient() {
++      RemoteConfigurationRegistryClient client = null;
++      if (remoteRegistryClient != null) {
++        RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
++        client = cs.get(remoteRegistryClient);
++        if (client == null) {
++          out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
++        }
++      } else {
++        out.println("Missing required argument : --registry-client\n");
++      }
++      return client;
++    }
++  }
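
Under the entry layout defined above, a provider configuration named sandbox-providers.xml (an illustrative name) would be stored at /knox/config/shared-providers/sandbox-providers.xml, and a descriptor named sandbox.json at /knox/config/descriptors/sandbox.json.
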
++
++
++  public class RemoteRegistryListProviderConfigsCommand extends RemoteRegistryCommand {
++    static final String USAGE = "list-provider-configs --registry-client name";
++    static final String DESC = "Lists the provider configurations present in the specified remote registry\n";
++
++    @Override
++    public void execute() {
++      RemoteConfigurationRegistryClient client = getClient();
++      if (client != null) {
++        out.println("Provider Configurations (@" + client.getAddress() + ")");
++        List<String> entries = client.listChildEntries(PROVIDER_CONFIG_ENTRY);
++        for (String entry : entries) {
++          out.println(entry);
++        }
++        out.println();
++      }
++    }
++
++    @Override
++    public String getUsage() {
++      return USAGE + ":\n\n" + DESC;
++    }
++  }
++
++
++  public class RemoteRegistryListDescriptorsCommand extends RemoteRegistryCommand {
++    static final String USAGE = "list-descriptors --registry-client name";
++    static final String DESC = "Lists the descriptors present in the specified remote registry\n";
++
++    @Override
++    public void execute() {
++      RemoteConfigurationRegistryClient client = getClient();
++      if (client != null) {
++        out.println("Descriptors (@" + client.getAddress() + ")");
++        List<String> entries = client.listChildEntries(DESCRIPTORS_ENTRY);
++        for (String entry : entries) {
++          out.println(entry);
++        }
++        out.println();
++      }
++    }
++
++    @Override
++    public String getUsage() {
++      return USAGE + ":\n\n" + DESC;
++    }
++  }
++
 +
 +  /**
 +   * Base class for remote config registry upload commands
 +   */
-   public abstract class RemoteRegistryUploadCommand extends Command {
-     protected static final String ROOT_ENTRY = "/knox";
-     protected static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
-     protected static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
-     protected static final String DESCRIPTORS__ENTRY = CONFIG_ENTRY + "/descriptors";
- 
++  public abstract class RemoteRegistryUploadCommand extends RemoteRegistryCommand {
 +    private File sourceFile = null;
 +    protected String filename = null;
 +
 +    protected RemoteRegistryUploadCommand(String sourceFileName) {
 +      this.filename = sourceFileName;
 +    }
 +
 +    private void upload(RemoteConfigurationRegistryClient client, String entryPath, File source) throws Exception {
 +      String content = FileUtils.readFileToString(source);
 +      if (client.entryExists(entryPath)) {
 +        // If it exists, then we're going to set the data
 +        client.setEntryData(entryPath, content);
 +      } else {
 +        // If it does not exist, then create it and set the data
 +        client.createEntry(entryPath, content);
 +      }
 +    }
 +
 +    File getSourceFile() {
 +      if (sourceFile == null) {
 +        sourceFile = new File(filename);
 +      }
 +      return sourceFile;
 +    }
 +
 +    String getEntryName(String prefixPath) {
 +      String entryName = remoteRegistryEntryName;
 +      if (entryName == null) {
 +        File sourceFile = getSourceFile();
 +        if (sourceFile.exists()) {
 +          String path = sourceFile.getAbsolutePath();
 +          entryName = path.substring(path.lastIndexOf(File.separator) + 1);
 +        } else {
 +          out.println("Could not locate source file: " + filename);
 +        }
 +      }
 +      return prefixPath + "/" + entryName;
 +    }
 +
 +    protected void execute(String entryName, File sourceFile) throws Exception {
-       if (remoteRegistryClient != null) {
-         RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-         RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-         if (client != null) {
-           if (entryName != null) {
-             upload(client, entryName, sourceFile);
-           }
-         } else {
-           out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
++      RemoteConfigurationRegistryClient client = getClient();
++      if (client != null) {
++        if (entryName != null) {
++          upload(client, entryName, sourceFile);
 +        }
-       } else {
-         out.println("Missing required argument : --registry-client\n");
 +      }
 +    }
- 
 +  }
 +
 +
 +  public class RemoteRegistryUploadProviderConfigCommand extends RemoteRegistryUploadCommand {
 +
 +    static final String USAGE = "upload-provider-config providerConfigFile --registry-client name [--entry-name entryName]";
 +    static final String DESC = "Uploads a provider configuration to the specified remote registry client, optionally " +
 +                               "renaming the entry.\nIf the entry name is not specified, the name of the uploaded " +
 +                               "file is used.\n";
 +
 +    RemoteRegistryUploadProviderConfigCommand(String fileName) {
 +      super(fileName);
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +     */
 +    @Override
 +    public void execute() throws Exception {
 +      super.execute(getEntryName(PROVIDER_CONFIG_ENTRY), getSourceFile());
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +     */
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
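
A typical invocation (the file and registry client names are illustrative):

    bin/knoxcli.sh upload-provider-config sandbox-providers.xml --registry-client sandbox-zookeeper-client
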
 +
 +
 +  public class RemoteRegistryUploadDescriptorCommand extends RemoteRegistryUploadCommand {
 +
 +    static final String USAGE = "upload-descriptor descriptorFile --registry-client name [--entry-name entryName]";
 +    static final String DESC = "Uploads a simple descriptor using the specified remote registry client, optionally " +
 +                               "renaming the entry.\nIf the entry name is not specified, the name of the uploaded " +
 +                               "file is used.\n";
 +
 +    RemoteRegistryUploadDescriptorCommand(String fileName) {
 +      super(fileName);
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +     */
 +    @Override
 +    public void execute() throws Exception {
-       super.execute(getEntryName(DESCRIPTORS__ENTRY), getSourceFile());
++      super.execute(getEntryName(DESCRIPTORS_ENTRY), getSourceFile());
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +     */
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
 +
 +
-   public class RemoteRegistryGetACLCommand extends Command {
++  public class RemoteRegistryGetACLCommand extends RemoteRegistryCommand {
 +
 +    static final String USAGE = "get-registry-acl entry --registry-client name";
 +    static final String DESC = "Presents the ACL settings for the specified remote registry entry.\n";
 +
 +    private String entry = null;
 +
 +    RemoteRegistryGetACLCommand(String entry) {
 +      this.entry = entry;
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#execute()
 +     */
 +    @Override
 +    public void execute() throws Exception {
-       if (remoteRegistryClient != null) {
-         RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-         RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-         if (client != null) {
-           if (entry != null) {
-             List<RemoteConfigurationRegistryClient.EntryACL> acls = client.getACL(entry);
-             for (RemoteConfigurationRegistryClient.EntryACL acl : acls) {
-               out.println(acl.getType() + ":" + acl.getId() + ":" + acl.getPermissions());
-             }
++      RemoteConfigurationRegistryClient client = getClient();
++      if (client != null) {
++        if (entry != null) {
++          List<RemoteConfigurationRegistryClient.EntryACL> acls = client.getACL(entry);
++          for (RemoteConfigurationRegistryClient.EntryACL acl : acls) {
++            out.println(acl.getType() + ":" + acl.getId() + ":" + acl.getPermissions());
 +          }
-         } else {
-           out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
 +        }
-       } else {
-         out.println("Missing required argument : --registry-client\n");
 +      }
 +    }
 +
 +    /* (non-Javadoc)
 +     * @see org.apache.knox.gateway.util.KnoxCLI.Command#getUsage()
 +     */
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
 +
 +
 +  /**
 +   * Base class for remote config registry delete commands
 +   */
-   public abstract class RemoteRegistryDeleteCommand extends Command {
-     protected static final String ROOT_ENTRY = "/knox";
-     protected static final String CONFIG_ENTRY = ROOT_ENTRY + "/config";
-     protected static final String PROVIDER_CONFIG_ENTRY = CONFIG_ENTRY + "/shared-providers";
-     protected static final String DESCRIPTORS__ENTRY = CONFIG_ENTRY + "/descriptors";
- 
++  public abstract class RemoteRegistryDeleteCommand extends RemoteRegistryCommand {
 +    protected String entryName = null;
 +
 +    protected RemoteRegistryDeleteCommand(String entryName) {
 +      this.entryName = entryName;
 +    }
 +
 +    private void delete(RemoteConfigurationRegistryClient client, String entryPath) throws Exception {
 +      if (client.entryExists(entryPath)) {
 +        // If it exists, then delete it
 +        client.deleteEntry(entryPath);
 +      }
 +    }
 +
 +    protected void execute(String entryName) throws Exception {
-       if (remoteRegistryClient != null) {
-         RemoteConfigurationRegistryClientService cs = getRemoteConfigRegistryClientService();
-         RemoteConfigurationRegistryClient client = cs.get(remoteRegistryClient);
-         if (client != null) {
-           if (entryName != null) {
-             delete(client, entryName);
-           }
-         } else {
-           out.println("No remote configuration registry identified by '" + remoteRegistryClient + "' could be found.");
++      RemoteConfigurationRegistryClient client = getClient();
++      if (client != null) {
++        if (entryName != null) {
++          delete(client, entryName);
 +        }
-       } else {
-         out.println("Missing required argument : --registry-client\n");
 +      }
 +    }
 +  }
 +
 +
 +  public class RemoteRegistryDeleteProviderConfigCommand extends RemoteRegistryDeleteCommand {
 +    static final String USAGE = "delete-provider-config providerConfig --registry-client name";
 +    static final String DESC = "Deletes a shared provider configuration from the specified remote registry.\n";
 +
 +    public RemoteRegistryDeleteProviderConfigCommand(String entryName) {
 +      super(entryName);
 +    }
 +
 +    @Override
 +    public void execute() throws Exception {
 +      execute(PROVIDER_CONFIG_ENTRY + "/" + entryName);
 +    }
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
 +
 +
 +  public class RemoteRegistryDeleteDescriptorCommand extends RemoteRegistryDeleteCommand {
 +    static final String USAGE = "delete-descriptor descriptor --registry-client name";
 +    static final String DESC = "Deletes a simple descriptor from the specified remote registry.\n";
 +
 +    public RemoteRegistryDeleteDescriptorCommand(String entryName) {
 +      super(entryName);
 +    }
 +
 +    @Override
 +    public void execute() throws Exception {
-       execute(DESCRIPTORS__ENTRY + "/" + entryName);
++      execute(DESCRIPTORS_ENTRY + "/" + entryName);
 +    }
 +
 +    @Override
 +    public String getUsage() {
 +      return USAGE + ":\n\n" + DESC;
 +    }
 +  }
 +
 +
 +  private static Properties loadBuildProperties() {
 +    Properties properties = new Properties();
 +    InputStream inputStream = KnoxCLI.class.getClassLoader().getResourceAsStream( "build.properties" );
 +    if( inputStream != null ) {
 +      try {
 +        properties.load( inputStream );
 +        inputStream.close();
 +      } catch( IOException e ) {
 +        // Ignore.
 +      }
 +    }
 +    return properties;
 +  }
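
The VersionCommand above reads the build.version and build.hash keys from this file; a representative build.properties (values illustrative) would contain:

    build.version=0.14.0
    build.hash=abc1234
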
 +
 +  /**
 +   * @param args the command-line arguments to pass through to the CLI
 +   * @throws Exception if the underlying command fails
 +   */
 +  public static void main(String[] args) throws Exception {
 +    PropertyConfigurator.configure( System.getProperty( "log4j.configuration" ) );
 +    int res = ToolRunner.run(new GatewayConfigImpl(), new KnoxCLI(), args);
 +    System.exit(res);
 +  }
 +}


[02/16] knox git commit: KNOX-1151

Posted by mo...@apache.org.
KNOX-1151


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/2b77fe10
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/2b77fe10
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/2b77fe10

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 2b77fe102878f2c1e1e9831cfe1be3df75adfbd1
Parents: c65eee2
Author: Phil Zampino <pz...@gmail.com>
Authored: Wed Dec 20 11:10:54 2017 -0500
Committer: Phil Zampino <pz...@gmail.com>
Committed: Wed Dec 20 11:10:54 2017 -0500

----------------------------------------------------------------------
 pom.xml | 5 +++++
 1 file changed, 5 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/2b77fe10/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index dee7279..aae453e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1152,6 +1152,11 @@
             </dependency>
             <dependency>
                 <groupId>org.apache.curator</groupId>
+                <artifactId>curator-recipes</artifactId>
+                <version>4.0.0</version>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.curator</groupId>
                 <artifactId>curator-client</artifactId>
                 <version>4.0.0</version>
             </dependency>


[14/16] knox git commit: Merge branch 'master' into KNOX-998-Package_Restructuring

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/test/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/test/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
index f015dd5,0000000..c0b1de8
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/test/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
+++ b/gateway-discovery-ambari/src/test/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
@@@ -1,876 -1,0 +1,920 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import org.apache.commons.io.FileUtils;
 +import org.easymock.EasyMock;
 +import org.junit.Test;
 +
 +import java.io.File;
 +import java.net.MalformedURLException;
 +import java.net.URI;
 +import java.net.URISyntaxException;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.LinkedList;
 +import java.util.List;
 +import java.util.Map;
 +
 +import static junit.framework.TestCase.assertTrue;
 +import static junit.framework.TestCase.fail;
 +import static org.junit.Assert.assertEquals;
++import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +
 +
 +public class AmbariDynamicServiceURLCreatorTest {
 +
 +    @Test
 +    public void testHiveURLFromInternalMapping() throws Exception {
 +        testHiveURL(null);
 +    }
 +
 +    @Test
 +    public void testHiveURLFromExternalMapping() throws Exception {
 +        testHiveURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testHiveURL(Object mappingConfiguration) throws Exception {
 +
 +        final String   SERVICE_NAME = "HIVE";
 +        final String[] HOSTNAMES    = {"host3", "host2", "host4"};
 +        final String   HTTP_PATH    = "cliservice";
 +        final String   HTTP_PORT    = "10001";
 +        final String   BINARY_PORT  = "10000";
 +
 +        String expectedScheme = "http";
 +
 +        final List<String> hiveServerHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent hiveServer = EasyMock.createNiceMock(AmbariComponent.class);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("HIVE_SERVER")).andReturn(hiveServer).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Configure HTTP Transport
 +        EasyMock.expect(hiveServer.getHostNames()).andReturn(hiveServerHosts).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.use.SSL")).andReturn("false").anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.path")).andReturn(HTTP_PATH).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.transport.mode")).andReturn("http").anyTimes();
 +        EasyMock.replay(hiveServer);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        List<String> urls = builder.create(SERVICE_NAME);
 +        assertEquals(HOSTNAMES.length, urls.size());
 +        validateServiceURLs(urls, HOSTNAMES, expectedScheme, HTTP_PORT, HTTP_PATH);
 +
 +        // Configure BINARY Transport
 +        EasyMock.reset(hiveServer);
 +        EasyMock.expect(hiveServer.getHostNames()).andReturn(hiveServerHosts).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.use.SSL")).andReturn("false").anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.path")).andReturn("").anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.port")).andReturn(BINARY_PORT).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.transport.mode")).andReturn("binary").anyTimes();
 +        EasyMock.replay(hiveServer);
 +
 +        // Run the test
 +        urls = builder.create(SERVICE_NAME);
 +        assertEquals(HOSTNAMES.length, urls.size());
 +        validateServiceURLs(urls, HOSTNAMES, expectedScheme, HTTP_PORT, "");
 +
 +        // Configure HTTPS Transport
 +        EasyMock.reset(hiveServer);
 +        EasyMock.expect(hiveServer.getHostNames()).andReturn(hiveServerHosts).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.use.SSL")).andReturn("true").anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.path")).andReturn(HTTP_PATH).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.thrift.http.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(hiveServer.getConfigProperty("hive.server2.transport.mode")).andReturn("http").anyTimes();
 +        EasyMock.replay(hiveServer);
 +
 +        // Run the test
 +        expectedScheme = "https";
 +        urls = builder.create(SERVICE_NAME);
 +        assertEquals(HOSTNAMES.length, urls.size());
 +        validateServiceURLs(urls, HOSTNAMES, expectedScheme, HTTP_PORT, HTTP_PATH);
 +    }
 +
++
 +    @Test
 +    public void testResourceManagerURLFromInternalMapping() throws Exception {
 +        testResourceManagerURL(null);
 +    }
 +
 +    @Test
 +    public void testResourceManagerURLFromExternalMapping() throws Exception {
 +        testResourceManagerURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testResourceManagerURL(Object mappingConfiguration) throws Exception {
 +
 +        final String HTTP_ADDRESS  = "host2:1111";
 +        final String HTTPS_ADDRESS = "host2:22222";
 +
 +        // HTTP
 +        AmbariComponent resman = EasyMock.createNiceMock(AmbariComponent.class);
 +        setResourceManagerComponentExpectations(resman, HTTP_ADDRESS, HTTPS_ADDRESS, "HTTP");
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("RESOURCEMANAGER")).andReturn(resman).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        String url = builder.create("RESOURCEMANAGER").get(0);
 +        assertEquals("http://" + HTTP_ADDRESS + "/ws", url);
 +
 +        // HTTPS
 +        EasyMock.reset(resman);
 +        setResourceManagerComponentExpectations(resman, HTTP_ADDRESS, HTTPS_ADDRESS, "HTTPS_ONLY");
 +
 +        // Run the test
 +        url = builder.create("RESOURCEMANAGER").get(0);
 +        assertEquals("https://" + HTTPS_ADDRESS + "/ws", url);
 +    }
 +
 +    private void setResourceManagerComponentExpectations(final AmbariComponent resmanMock,
 +                                                         final String          httpAddress,
 +                                                         final String          httpsAddress,
 +                                                         final String          httpPolicy) {
 +        EasyMock.expect(resmanMock.getConfigProperty("yarn.resourcemanager.webapp.address")).andReturn(httpAddress).anyTimes();
 +        EasyMock.expect(resmanMock.getConfigProperty("yarn.resourcemanager.webapp.https.address")).andReturn(httpsAddress).anyTimes();
 +        EasyMock.expect(resmanMock.getConfigProperty("yarn.http.policy")).andReturn(httpPolicy).anyTimes();
 +        EasyMock.replay(resmanMock);
 +    }
 +
 +    @Test
 +    public void testJobTrackerURLFromInternalMapping() throws Exception {
 +        testJobTrackerURL(null);
 +    }
 +
 +    @Test
 +    public void testJobTrackerURLFromExternalMapping() throws Exception {
 +        testJobTrackerURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testJobTrackerURL(Object mappingConfiguration) throws Exception {
 +        final String ADDRESS = "host2:5678";
 +
 +        AmbariComponent resman = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(resman.getConfigProperty("yarn.resourcemanager.address")).andReturn(ADDRESS).anyTimes();
 +        EasyMock.replay(resman);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("RESOURCEMANAGER")).andReturn(resman).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        String url = builder.create("JOBTRACKER").get(0);
 +        assertEquals("rpc://" + ADDRESS, url);
 +    }
 +
 +    @Test
 +    public void testNameNodeURLFromInternalMapping() throws Exception {
 +        testNameNodeURL(null);
 +    }
 +
 +    @Test
 +    public void testNameNodeURLFromExternalMapping() throws Exception {
 +        testNameNodeURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testNameNodeURL(Object mappingConfiguration) throws Exception {
 +        final String ADDRESS = "host1:1234";
 +
 +        AmbariComponent namenode = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(namenode.getConfigProperty("dfs.namenode.rpc-address")).andReturn(ADDRESS).anyTimes();
 +        EasyMock.replay(namenode);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("NAMENODE")).andReturn(namenode).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        String url = builder.create("NAMENODE").get(0);
 +        assertEquals("hdfs://" + ADDRESS, url);
 +    }
 +
++
++    @Test
++    public void testNameNodeHAURLFromInternalMapping() throws Exception {
++        testNameNodeURLHA(null);
++    }
++
++    @Test
++    public void testNameNodeHAURLFromExternalMapping() throws Exception {
++        testNameNodeURLHA(TEST_MAPPING_CONFIG);
++    }
++
++    private void testNameNodeURLHA(Object mappingConfiguration) throws Exception {
++        final String NAMESERVICE = "myNSCluster";
++
++        AmbariComponent namenode = EasyMock.createNiceMock(AmbariComponent.class);
++        EasyMock.expect(namenode.getConfigProperty("dfs.nameservices")).andReturn(NAMESERVICE).anyTimes();
++        EasyMock.replay(namenode);
++
++        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
++        EasyMock.expect(cluster.getComponent("NAMENODE")).andReturn(namenode).anyTimes();
++        EasyMock.replay(cluster);
++
++        // Run the test
++        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
++        String url = builder.create("NAMENODE").get(0);
++        assertEquals("hdfs://" + NAMESERVICE, url);
++    }
++
++
 +    @Test
 +    public void testWebHCatURLFromInternalMapping() throws Exception {
 +        testWebHCatURL(null);
 +    }
 +
 +    @Test
 +    public void testWebHCatURLFromExternalMapping() throws Exception {
 +        testWebHCatURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testWebHCatURL(Object mappingConfiguration) throws Exception {
 +
 +        final String HOSTNAME = "host3";
 +        final String PORT     = "1919";
 +
 +        AmbariComponent webhcatServer = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(webhcatServer.getConfigProperty("templeton.port")).andReturn(PORT).anyTimes();
 +        List<String> webHcatServerHosts = Collections.singletonList(HOSTNAME);
 +        EasyMock.expect(webhcatServer.getHostNames()).andReturn(webHcatServerHosts).anyTimes();
 +        EasyMock.replay(webhcatServer);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("WEBHCAT_SERVER")).andReturn(webhcatServer).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        String url = builder.create("WEBHCAT").get(0);
 +        assertEquals("http://" + HOSTNAME + ":" + PORT + "/templeton", url);
 +    }
 +
 +    @Test
 +    public void testOozieURLFromInternalMapping() throws Exception {
 +        testOozieURL(null);
 +    }
 +
 +    @Test
 +    public void testOozieURLFromExternalMapping() throws Exception {
 +        testOozieURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testOozieURL(Object mappingConfiguration) throws Exception {
 +        final String URL = "http://host3:2222";
 +
 +        AmbariComponent oozieServer = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(oozieServer.getConfigProperty("oozie.base.url")).andReturn(URL).anyTimes();
 +        EasyMock.replay(oozieServer);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("OOZIE_SERVER")).andReturn(oozieServer).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        String url = builder.create("OOZIE").get(0);
 +        assertEquals(URL, url);
 +    }
 +
 +    @Test
 +    public void testWebHBaseURLFromInternalMapping() throws Exception {
 +        testWebHBaseURL(null);
 +    }
 +
 +    @Test
 +    public void testWebHBaseURLFromExternalMapping() throws Exception {
 +        testWebHBaseURL(TEST_MAPPING_CONFIG);
 +    }
 +
 +    private void testWebHBaseURL(Object mappingConfiguration) throws Exception {
 +        final String[] HOSTNAMES = {"host2", "host4"};
 +
 +        AmbariComponent hbaseMaster = EasyMock.createNiceMock(AmbariComponent.class);
 +        List<String> hbaseMasterHosts = Arrays.asList(HOSTNAMES);
 +        EasyMock.expect(hbaseMaster.getHostNames()).andReturn(hbaseMasterHosts).anyTimes();
 +        EasyMock.replay(hbaseMaster);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("HBASE_MASTER")).andReturn(hbaseMaster).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
 +        List<String> urls = builder.create("WEBHBASE");
 +        validateServiceURLs(urls, HOSTNAMES, "http", "60080", null);
 +    }
 +
 +    @Test
 +    public void testWebHdfsURLFromInternalMapping() throws Exception {
 +        testWebHdfsURL(null);
 +    }
 +
 +    @Test
 +    public void testWebHdfsURLFromExternalMapping() throws Exception {
 +        testWebHdfsURL(TEST_MAPPING_CONFIG);
 +    }
 +
-     @Test
-     public void testWebHdfsURLFromSystemPropertyOverride() throws Exception {
-         // Write the test mapping configuration to a temp file
-         File mappingFile = File.createTempFile("mapping-config", "xml");
-         FileUtils.write(mappingFile, OVERRIDE_MAPPING_FILE_CONTENTS, "utf-8");
- 
-         // Set the system property to point to the temp file
-         System.setProperty(AmbariDynamicServiceURLCreator.MAPPING_CONFIG_OVERRIDE_PROPERTY,
-                            mappingFile.getAbsolutePath());
-         try {
-             final String ADDRESS = "host3:1357";
-             // The URL creator should apply the file contents, and create the URL accordingly
-             String url = getTestWebHdfsURL(ADDRESS, null);
- 
-             // Verify the URL matches the pattern from the file
-             assertEquals("http://" + ADDRESS + "/webhdfs/OVERRIDE", url);
-         } finally {
-             // Reset the system property, and delete the temp file
-             System.clearProperty(AmbariDynamicServiceURLCreator.MAPPING_CONFIG_OVERRIDE_PROPERTY);
-             mappingFile.delete();
-         }
-     }
- 
 +    private void testWebHdfsURL(Object mappingConfiguration) throws Exception {
 +        final String ADDRESS = "host3:1357";
 +        assertEquals("http://" + ADDRESS + "/webhdfs", getTestWebHdfsURL(ADDRESS, mappingConfiguration));
 +    }
 +
 +
 +    private String getTestWebHdfsURL(String address, Object mappingConfiguration) throws Exception {
 +        AmbariCluster.ServiceConfiguration hdfsSC = EasyMock.createNiceMock(AmbariCluster.ServiceConfiguration.class);
 +        Map<String, String> hdfsProps = new HashMap<>();
 +        hdfsProps.put("dfs.namenode.http-address", address);
 +        EasyMock.expect(hdfsSC.getProperties()).andReturn(hdfsProps).anyTimes();
 +        EasyMock.replay(hdfsSC);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getServiceConfiguration("HDFS", "hdfs-site")).andReturn(hdfsSC).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Create the URL
-         AmbariDynamicServiceURLCreator creator = newURLCreator(cluster, mappingConfiguration);
-         return creator.create("WEBHDFS").get(0);
++        List<String> urls = ServiceURLFactory.newInstance(cluster).create("WEBHDFS");
++        assertNotNull(urls);
++        assertFalse(urls.isEmpty());
++        return urls.get(0);
++    }
++
++    @Test
++    public void testWebHdfsURLHA() throws Exception {
++        final String NAMESERVICES   = "myNameServicesCluster";
++        final String HTTP_ADDRESS_1 = "host1:50070";
++        final String HTTP_ADDRESS_2 = "host2:50077";
++
++        final String EXPECTED_ADDR_1 = "http://" + HTTP_ADDRESS_1 + "/webhdfs";
++        final String EXPECTED_ADDR_2 = "http://" + HTTP_ADDRESS_2 + "/webhdfs";
++
++        AmbariComponent namenode = EasyMock.createNiceMock(AmbariComponent.class);
++        EasyMock.expect(namenode.getConfigProperty("dfs.nameservices")).andReturn(NAMESERVICES).anyTimes();
++        EasyMock.replay(namenode);
++
++        AmbariCluster.ServiceConfiguration hdfsSC = EasyMock.createNiceMock(AmbariCluster.ServiceConfiguration.class);
++        Map<String, String> hdfsProps = new HashMap<>();
++        hdfsProps.put("dfs.namenode.http-address." + NAMESERVICES + ".nn1", HTTP_ADDRESS_1);
++        hdfsProps.put("dfs.namenode.http-address." + NAMESERVICES + ".nn2", HTTP_ADDRESS_2);
++        EasyMock.expect(hdfsSC.getProperties()).andReturn(hdfsProps).anyTimes();
++        EasyMock.replay(hdfsSC);
++
++        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
++        EasyMock.expect(cluster.getComponent("NAMENODE")).andReturn(namenode).anyTimes();
++        EasyMock.expect(cluster.getServiceConfiguration("HDFS", "hdfs-site")).andReturn(hdfsSC).anyTimes();
++        EasyMock.replay(cluster);
++
++        // Create the URL
++        List<String> webhdfsURLs = ServiceURLFactory.newInstance(cluster).create("WEBHDFS");
++        assertEquals(2, webhdfsURLs.size());
++        assertTrue(webhdfsURLs.contains(EXPECTED_ADDR_1));
++        assertTrue(webhdfsURLs.contains(EXPECTED_ADDR_2));
 +    }
 +
 +
 +    @Test
 +    public void testAtlasApiURL() throws Exception {
 +        final String ATLAS_REST_ADDRESS = "http://host2:21000";
 +
 +        AmbariComponent atlasServer = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.rest.address")).andReturn(ATLAS_REST_ADDRESS).anyTimes();
 +        EasyMock.replay(atlasServer);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("ATLAS_SERVER")).andReturn(atlasServer).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("ATLAS-API");
 +        assertEquals(1, urls.size());
 +        assertEquals(ATLAS_REST_ADDRESS, urls.get(0));
 +    }
 +
 +
 +    @Test
 +    public void testAtlasURL() throws Exception {
 +        final String HTTP_PORT = "8787";
 +        final String HTTPS_PORT = "8989";
 +
 +        final String[] HOSTNAMES = {"host1", "host4"};
 +        final List<String> atlasServerHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent atlasServer = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(atlasServer.getHostNames()).andReturn(atlasServerHosts).anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.enableTLS")).andReturn("false").anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.server.http.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.server.https.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(atlasServer);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("ATLAS_SERVER")).andReturn(atlasServer).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("ATLAS");
 +        validateServiceURLs(urls, HOSTNAMES, "http", HTTP_PORT, null);
 +
 +        EasyMock.reset(atlasServer);
 +        EasyMock.expect(atlasServer.getHostNames()).andReturn(atlasServerHosts).anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.enableTLS")).andReturn("true").anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.server.http.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(atlasServer.getConfigProperty("atlas.server.https.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(atlasServer);
 +
 +        // Run the test
 +        urls = builder.create("ATLAS");
 +        validateServiceURLs(urls, HOSTNAMES, "https", HTTPS_PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testZeppelinURL() throws Exception {
 +        final String HTTP_PORT = "8787";
 +        final String HTTPS_PORT = "8989";
 +
 +        final String[] HOSTNAMES = {"host1", "host4"};
 +        final List<String> zeppelinServerHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent zeppelinMaster = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("false").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("ZEPPELIN_MASTER")).andReturn(zeppelinMaster).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELIN"), HOSTNAMES, "http", HTTP_PORT, null);
 +
 +        EasyMock.reset(zeppelinMaster);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("true").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELIN"), HOSTNAMES, "https", HTTPS_PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testZeppelinUiURL() throws Exception {
 +        final String HTTP_PORT = "8787";
 +        final String HTTPS_PORT = "8989";
 +
 +        final String[] HOSTNAMES = {"host1", "host4"};
 +        final List<String> zeppelinServerHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent zeppelinMaster = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("false").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("ZEPPELIN_MASTER")).andReturn(zeppelinMaster).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELINUI"), HOSTNAMES, "http", HTTP_PORT, null);
 +
 +        EasyMock.reset(zeppelinMaster);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("true").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELINUI"), HOSTNAMES, "https", HTTPS_PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testZeppelinWsURL() throws Exception {
 +        final String HTTP_PORT = "8787";
 +        final String HTTPS_PORT = "8989";
 +
 +        final String[] HOSTNAMES = {"host1", "host4"};
 +        final List<String> zeppelinServerHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent zeppelinMaster = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("false").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("ZEPPELIN_MASTER")).andReturn(zeppelinMaster).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELINWS"), HOSTNAMES, "ws", HTTP_PORT, null);
 +
 +        EasyMock.reset(zeppelinMaster);
 +        EasyMock.expect(zeppelinMaster.getHostNames()).andReturn(zeppelinServerHosts).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.ssl")).andReturn("true").anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.port")).andReturn(HTTP_PORT).anyTimes();
 +        EasyMock.expect(zeppelinMaster.getConfigProperty("zeppelin.server.ssl.port")).andReturn(HTTPS_PORT).anyTimes();
 +        EasyMock.replay(zeppelinMaster);
 +
 +        // Run the test
 +        validateServiceURLs(builder.create("ZEPPELINWS"), HOSTNAMES, "wss", HTTPS_PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testDruidCoordinatorURL() throws Exception {
 +        final String PORT = "8787";
 +
 +        final String[] HOSTNAMES = {"host3", "host2"};
 +        final List<String> druidCoordinatorHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent druidCoordinator = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(druidCoordinator.getHostNames()).andReturn(druidCoordinatorHosts).anyTimes();
 +        EasyMock.expect(druidCoordinator.getConfigProperty("druid.port")).andReturn(PORT).anyTimes();
 +        EasyMock.replay(druidCoordinator);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_COORDINATOR")).andReturn(druidCoordinator).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("DRUID-COORDINATOR");
 +        validateServiceURLs(urls, HOSTNAMES, "http", PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testDruidBrokerURL() throws Exception {
 +        final String PORT = "8181";
 +
 +        final String[] HOSTNAMES = {"host4", "host3"};
 +        final List<String> druidHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent druidBroker = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(druidBroker.getHostNames()).andReturn(druidHosts).anyTimes();
 +        EasyMock.expect(druidBroker.getConfigProperty("druid.port")).andReturn(PORT).anyTimes();
 +        EasyMock.replay(druidBroker);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_BROKER")).andReturn(druidBroker).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("DRUID-BROKER");
 +        validateServiceURLs(urls, HOSTNAMES, "http", PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testDruidRouterURL() throws Exception {
 +        final String PORT = "8282";
 +
 +        final String[] HOSTNAMES = {"host5", "host7"};
 +        final List<String> druidHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent druidRouter = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(druidRouter.getHostNames()).andReturn(druidHosts).anyTimes();
 +        EasyMock.expect(druidRouter.getConfigProperty("druid.port")).andReturn(PORT).anyTimes();
 +        EasyMock.replay(druidRouter);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_ROUTER")).andReturn(druidRouter).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("DRUID-ROUTER");
 +        validateServiceURLs(urls, HOSTNAMES, "http", PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testDruidOverlordURL() throws Exception {
 +        final String PORT = "8383";
 +
 +        final String[] HOSTNAMES = {"host4", "host1"};
 +        final List<String> druidHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent druidOverlord = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(druidOverlord.getHostNames()).andReturn(druidHosts).anyTimes();
 +        EasyMock.expect(druidOverlord.getConfigProperty("druid.port")).andReturn(PORT).anyTimes();
 +        EasyMock.replay(druidOverlord);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_OVERLORD")).andReturn(druidOverlord).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("DRUID-OVERLORD");
 +        validateServiceURLs(urls, HOSTNAMES, "http", PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testDruidSupersetURL() throws Exception {
 +        final String PORT = "8484";
 +
 +        final String[] HOSTNAMES = {"host4", "host1"};
 +        final List<String> druidHosts = Arrays.asList(HOSTNAMES);
 +
 +        AmbariComponent druidSuperset = EasyMock.createNiceMock(AmbariComponent.class);
 +        EasyMock.expect(druidSuperset.getHostNames()).andReturn(druidHosts).anyTimes();
 +        EasyMock.expect(druidSuperset.getConfigProperty("SUPERSET_WEBSERVER_PORT")).andReturn(PORT).anyTimes();
 +        EasyMock.replay(druidSuperset);
 +
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_SUPERSET")).andReturn(druidSuperset).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("SUPERSET");
 +        validateServiceURLs(urls, HOSTNAMES, "http", PORT, null);
 +    }
 +
 +
 +    @Test
 +    public void testMissingServiceComponentURL() throws Exception {
 +        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
 +        EasyMock.expect(cluster.getComponent("DRUID_BROKER")).andReturn(null).anyTimes();
 +        EasyMock.expect(cluster.getComponent("HIVE_SERVER")).andReturn(null).anyTimes();
 +        EasyMock.replay(cluster);
 +
 +        // Run the test
 +        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, null);
 +        List<String> urls = builder.create("DRUID-BROKER");
 +        assertNotNull(urls);
 +        assertEquals(1, urls.size());
 +        assertEquals("http://{HOST}:{PORT}", urls.get(0));
 +
 +        urls = builder.create("HIVE");
 +        assertNotNull(urls);
 +        assertEquals(1, urls.size());
 +        assertEquals("http://{HOST}:{PORT}/{PATH}", urls.get(0));
 +    }
 +
 +
 +    /**
 +     * Convenience method for creating AmbariDynamicServiceURLCreator instances from different mapping configuration
 +     * input sources.
 +     *
 +     * @param cluster       The Ambari ServiceDiscovery Cluster model
 +     * @param mappingConfig The mapping configuration, or null if the internal config should be used.
 +     *
 +     * @return An AmbariDynamicServiceURLCreator instance, capable of creating service URLs based on the specified
 +     *         cluster's configuration details.
 +     */
 +    private static AmbariDynamicServiceURLCreator newURLCreator(AmbariCluster cluster, Object mappingConfig) throws Exception {
 +        AmbariDynamicServiceURLCreator result = null;
 +
 +        if (mappingConfig == null) {
 +            result = new AmbariDynamicServiceURLCreator(cluster);
 +        } else {
 +            if (mappingConfig instanceof String) {
 +                result = new AmbariDynamicServiceURLCreator(cluster, (String) mappingConfig);
 +            } else if (mappingConfig instanceof File) {
 +                result = new AmbariDynamicServiceURLCreator(cluster, (File) mappingConfig);
 +            }
 +        }
 +
 +        return result;
 +    }
 +
 +
 +    /**
 +     * Validate the specified service URLs.
 +     *
 +     * @param urlsToValidate The URLs to validate
 +     * @param hostNames      The host names expected in the test URLs
 +     * @param scheme         The expected scheme for the URLs
 +     * @param port           The expected port for the URLs
 +     * @param path           The expected path for the URLs
 +     */
 +    private static void validateServiceURLs(List<String> urlsToValidate,
 +                                            String[]     hostNames,
 +                                            String       scheme,
 +                                            String       port,
 +                                            String       path) throws MalformedURLException {
 +
 +        List<String> hostNamesToTest = new LinkedList<>(Arrays.asList(hostNames));
 +        for (String url : urlsToValidate) {
 +            URI test = null;
 +            try {
 +                // Make sure it's a valid URL
 +                test = new URI(url);
 +            } catch (URISyntaxException e) {
 +                fail(e.getMessage());
 +            }
 +
 +            // Validate the scheme
 +            assertEquals(scheme, test.getScheme());
 +
 +            // Validate the port
 +            assertEquals(port, String.valueOf(test.getPort()));
 +
 +            // If the expected path is not specified, don't validate it
 +            if (path != null) {
 +                assertEquals("/" + path, test.getPath());
 +            }
 +
 +            // Validate the host name
 +            assertTrue(hostNamesToTest.contains(test.getHost()));
 +            hostNamesToTest.remove(test.getHost());
 +        }
 +        assertTrue(hostNamesToTest.isEmpty());
 +    }
 +
 +
 +    private static final String TEST_MAPPING_CONFIG =
 +            "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" +
 +            "<service-discovery-url-mappings>\n" +
 +            "  <service name=\"NAMENODE\">\n" +
-             "    <url-pattern>hdfs://{DFS_NAMENODE_RPC_ADDRESS}</url-pattern>\n" +
++            "    <url-pattern>hdfs://{DFS_NAMENODE_ADDRESS}</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"DFS_NAMENODE_RPC_ADDRESS\">\n" +
 +            "        <component>NAMENODE</component>\n" +
 +            "        <config-property>dfs.namenode.rpc-address</config-property>\n" +
 +            "      </property>\n" +
++            "      <property name=\"DFS_NAMESERVICES\">\n" +
++            "        <component>NAMENODE</component>\n" +
++            "        <config-property>dfs.nameservices</config-property>\n" +
++            "      </property>\n" +
++            "      <property name=\"DFS_NAMENODE_ADDRESS\">\n" +
++            "        <config-property>\n" +
++            "          <if property=\"DFS_NAMESERVICES\">\n" +
++            "            <then>DFS_NAMESERVICES</then>\n" +
++            "            <else>DFS_NAMENODE_RPC_ADDRESS</else>\n" +
++            "          </if>\n" +
++            "        </config-property>\n" +
++            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "\n" +
 +            "  <service name=\"JOBTRACKER\">\n" +
 +            "    <url-pattern>rpc://{YARN_RM_ADDRESS}</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"YARN_RM_ADDRESS\">\n" +
 +            "        <component>RESOURCEMANAGER</component>\n" +
 +            "        <config-property>yarn.resourcemanager.address</config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "\n" +
-             "  <service name=\"WEBHDFS\">\n" +
-             "    <url-pattern>http://{WEBHDFS_ADDRESS}/webhdfs</url-pattern>\n" +
-             "    <properties>\n" +
-             "      <property name=\"WEBHDFS_ADDRESS\">\n" +
-             "        <service-config name=\"HDFS\">hdfs-site</service-config>\n" +
-             "        <config-property>dfs.namenode.http-address</config-property>\n" +
-             "      </property>\n" +
-             "    </properties>\n" +
-             "  </service>\n" +
-             "\n" +
 +            "  <service name=\"WEBHCAT\">\n" +
 +            "    <url-pattern>http://{HOST}:{PORT}/templeton</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"HOST\">\n" +
 +            "        <component>WEBHCAT_SERVER</component>\n" +
 +            "        <hostname/>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"PORT\">\n" +
 +            "        <component>WEBHCAT_SERVER</component>\n" +
 +            "        <config-property>templeton.port</config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "\n" +
 +            "  <service name=\"OOZIE\">\n" +
 +            "    <url-pattern>{OOZIE_ADDRESS}</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"OOZIE_ADDRESS\">\n" +
 +            "        <component>OOZIE_SERVER</component>\n" +
 +            "        <config-property>oozie.base.url</config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "\n" +
 +            "  <service name=\"WEBHBASE\">\n" +
 +            "    <url-pattern>http://{HOST}:60080</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"HOST\">\n" +
 +            "        <component>HBASE_MASTER</component>\n" +
 +            "        <hostname/>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "  <service name=\"RESOURCEMANAGER\">\n" +
 +            "    <url-pattern>{SCHEME}://{WEBAPP_ADDRESS}/ws</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"WEBAPP_HTTP_ADDRESS\">\n" +
 +            "        <component>RESOURCEMANAGER</component>\n" +
 +            "        <config-property>yarn.resourcemanager.webapp.address</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"WEBAPP_HTTPS_ADDRESS\">\n" +
 +            "        <component>RESOURCEMANAGER</component>\n" +
 +            "        <config-property>yarn.resourcemanager.webapp.https.address</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"HTTP_POLICY\">\n" +
 +            "        <component>RESOURCEMANAGER</component>\n" +
 +            "        <config-property>yarn.http.policy</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"SCHEME\">\n" +
 +            "        <config-property>\n" +
 +            "          <if property=\"HTTP_POLICY\" value=\"HTTPS_ONLY\">\n" +
 +            "            <then>https</then>\n" +
 +            "            <else>http</else>\n" +
 +            "          </if>\n" +
 +            "        </config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"WEBAPP_ADDRESS\">\n" +
 +            "        <component>RESOURCEMANAGER</component>\n" +
 +            "        <config-property>\n" +
 +            "          <if property=\"HTTP_POLICY\" value=\"HTTPS_ONLY\">\n" +
 +            "            <then>WEBAPP_HTTPS_ADDRESS</then>\n" +
 +            "            <else>WEBAPP_HTTP_ADDRESS</else>\n" +
 +            "          </if>\n" +
 +            "        </config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "  <service name=\"HIVE\">\n" +
 +            "    <url-pattern>{SCHEME}://{HOST}:{PORT}/{PATH}</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"HOST\">\n" +
 +            "        <component>HIVE_SERVER</component>\n" +
 +            "        <hostname/>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"USE_SSL\">\n" +
 +            "        <component>HIVE_SERVER</component>\n" +
 +            "        <config-property>hive.server2.use.SSL</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"PATH\">\n" +
 +            "        <component>HIVE_SERVER</component>\n" +
 +            "        <config-property>hive.server2.thrift.http.path</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"PORT\">\n" +
 +            "        <component>HIVE_SERVER</component>\n" +
 +            "        <config-property>hive.server2.thrift.http.port</config-property>\n" +
 +            "      </property>\n" +
 +            "      <property name=\"SCHEME\">\n" +
 +            "        <config-property>\n" +
 +            "            <if property=\"USE_SSL\" value=\"true\">\n" +
 +            "                <then>https</then>\n" +
 +            "                <else>http</else>\n" +
 +            "            </if>\n" +
 +            "        </config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "</service-discovery-url-mappings>\n";
 +
 +
 +    private static final String OVERRIDE_MAPPING_FILE_CONTENTS =
 +            "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" +
 +            "<service-discovery-url-mappings>\n" +
 +            "  <service name=\"WEBHDFS\">\n" +
 +            "    <url-pattern>http://{WEBHDFS_ADDRESS}/webhdfs/OVERRIDE</url-pattern>\n" +
 +            "    <properties>\n" +
 +            "      <property name=\"WEBHDFS_ADDRESS\">\n" +
 +            "        <service-config name=\"HDFS\">hdfs-site</service-config>\n" +
 +            "        <config-property>dfs.namenode.http-address</config-property>\n" +
 +            "      </property>\n" +
 +            "    </properties>\n" +
 +            "  </service>\n" +
 +            "</service-discovery-url-mappings>\n";
 +
 +}
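
A note on the WEBHDFS resolution exercised above: URLs are now created through
ServiceURLFactory, which consults the NAMENODE component's dfs.nameservices
property to choose between HA (nameservice-based) and non-HA (http-address)
resolution. Below is a minimal sketch of the non-HA call pattern, mirroring
getTestWebHdfsURL() above; the host and port values are illustrative only, and
the snippet assumes this test class's EasyMock/Ambari imports.

    // Sketch only, not part of this commit.
    AmbariCluster.ServiceConfiguration hdfsSite =
        EasyMock.createNiceMock(AmbariCluster.ServiceConfiguration.class);
    Map<String, String> props = new HashMap<>();
    props.put("dfs.namenode.http-address", "host3:1357");
    EasyMock.expect(hdfsSite.getProperties()).andReturn(props).anyTimes();
    EasyMock.replay(hdfsSite);

    AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
    EasyMock.expect(cluster.getServiceConfiguration("HDFS", "hdfs-site"))
            .andReturn(hdfsSite).anyTimes();
    EasyMock.replay(cluster);

    // With no dfs.nameservices value mocked, a single non-HA URL results:
    List<String> urls = ServiceURLFactory.newInstance(cluster).create("WEBHDFS");
    // urls contains "http://host3:1357/webhdfs"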

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/main/java/org/apache/knox/gateway/filter/PortMappingHelperHandler.java
----------------------------------------------------------------------
diff --cc gateway-server/src/main/java/org/apache/knox/gateway/filter/PortMappingHelperHandler.java
index 71df7c4,0000000..69bc0be
mode 100644,000000..100644
--- a/gateway-server/src/main/java/org/apache/knox/gateway/filter/PortMappingHelperHandler.java
+++ b/gateway-server/src/main/java/org/apache/knox/gateway/filter/PortMappingHelperHandler.java
@@@ -1,156 -1,0 +1,156 @@@
 +package org.apache.knox.gateway.filter;
 +
 +import org.apache.commons.lang.StringUtils;
 +import org.apache.knox.gateway.GatewayMessages;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.eclipse.jetty.server.Request;
 +import org.eclipse.jetty.server.handler.HandlerWrapper;
 +
 +import javax.servlet.ServletException;
 +import javax.servlet.http.HttpServletRequest;
 +import javax.servlet.http.HttpServletResponse;
 +import java.io.IOException;
 +import java.util.Map;
 +
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +/**
 + * This is a helper handler that adjusts the "target" path of the request.
 + * Used when the Topology Port Mapping feature is enabled.
 + * See KNOX-928.
 + * <p>
 + * This class also handles the Default Topology Feature,
 + * where any one of the topologies can be designated the "default",
 + * listen on the standard Knox port (8443), and be addressed
 + * without the /gateway/{topology} context path.
 + * Essentially, this is Topology Port Mapping for the standard port,
 + * backwards compatible with the Default Topology Feature.
 + *
 + */
 +public class PortMappingHelperHandler extends HandlerWrapper {
 +
 +  private static final GatewayMessages LOG = MessagesFactory
 +      .get(GatewayMessages.class);
 +
 +  final GatewayConfig config;
 +
 +  private String defaultTopologyRedirectContext = null;
 +
 +  public PortMappingHelperHandler(final GatewayConfig config) {
 +
 +    this.config = config;
 +    //Set up context for default topology feature.
 +    String defaultTopologyName = config.getDefaultTopologyName();
 +
 +    // The default topology feature can also be enabled using the port mapping
 +    // feature config, e.g. gateway.port.mapping.{defaultTopologyName}
 +
 +    if(defaultTopologyName == null && config.getGatewayPortMappings().values().contains(config.getGatewayPort())) {
 +
 +      for(final Map.Entry<String, Integer> entry: config.getGatewayPortMappings().entrySet()) {
 +
 +        if(entry.getValue().intValue() == config.getGatewayPort()) {
 +          defaultTopologyRedirectContext = "/" + config.getGatewayPath() + "/" + entry.getKey();
 +          break;
 +        }
 +
 +      }
 +
 +
 +    }
 +
 +    if (defaultTopologyName != null) {
 +      defaultTopologyRedirectContext = config.getDefaultAppRedirectPath();
 +      if (defaultTopologyRedirectContext != null
 +          && defaultTopologyRedirectContext.trim().isEmpty()) {
 +        defaultTopologyRedirectContext = null;
 +      }
 +    }
 +    if (defaultTopologyRedirectContext != null) {
 +      LOG.defaultTopologySetup(defaultTopologyName,
 +          defaultTopologyRedirectContext);
 +    }
 +
 +  }
 +
 +  @Override
 +  public void handle(final String target, final Request baseRequest,
 +      final HttpServletRequest request, final HttpServletResponse response)
 +      throws IOException, ServletException {
 +
 +    String newTarget = target;
-     String baseURI = baseRequest.getUri().toString();
++    String baseURI = baseRequest.getRequestURI();
 +
 +    // If Port Mapping feature enabled
 +    if (config.isGatewayPortMappingEnabled()) {
 +      int targetIndex;
 +      String context = "";
 +
 +      // extract the gateway specific part i.e. {/gatewayName/}
 +      String originalContextPath = "";
 +      targetIndex = StringUtils.ordinalIndexOf(target, "/", 2);
 +
 +      // Match found e.g. /{string}/
 +      if (targetIndex > 0) {
 +        originalContextPath = target.substring(0, targetIndex + 1);
 +      } else if (targetIndex == -1) {
 +        targetIndex = StringUtils.ordinalIndexOf(target, "/", 1);
 +        // For cases "/" and "/hive"
 +        if(targetIndex == 0) {
 +          originalContextPath = target;
 +        }
 +      }
 +
 +      // Match "/{gatewayName}/{topologyName/foo" or "/".
 +      // There could be a case where content is served from the root
 +      // i.e. https://host:port/
 +
 +      if (!baseURI.startsWith(originalContextPath)) {
 +        final int index = StringUtils.ordinalIndexOf(baseURI, "/", 3);
 +        if (index > 0) {
 +          context = baseURI.substring(0, index);
 +        }
 +      }
 +
 +      if(!StringUtils.isBlank(context)) {
 +        LOG.topologyPortMappingAddContext(target, context + target);
 +      }
 +      // Move on to the next handler in chain with updated path
 +      newTarget = context + target;
 +    }
 +
 +    //Backwards compatibility for default topology feature
 +    if (defaultTopologyRedirectContext != null && !baseURI
 +        .startsWith("/" + config.getGatewayPath())) {
 +      newTarget = defaultTopologyRedirectContext + target;
 +
 +      final RequestUpdateHandler.ForwardedRequest newRequest = new RequestUpdateHandler.ForwardedRequest(
 +          request, defaultTopologyRedirectContext, newTarget);
 +
 +      LOG.defaultTopologyForward(target, newTarget);
 +      super.handle(newTarget, baseRequest, newRequest, response);
 +
 +    } else {
 +
 +      super.handle(newTarget, baseRequest, request, response);
 +    }
 +
 +  }
 +}
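
For context on the port mapping checks above: the constructor inspects
GatewayConfig.getGatewayPortMappings(), whose entries are populated from
gateway-site properties of the form gateway.port.mapping.{topologyName}, as
noted in the constructor comment. An illustrative gateway-site sketch follows;
the "sandbox" topology name is hypothetical, and gateway.port.mapping.enabled
is assumed to be the switch behind isGatewayPortMappingEnabled().

    <!-- Illustrative only: make the "sandbox" topology the default topology
         by mapping it to the standard gateway port (8443), as described in
         the class comment above. -->
    <property>
        <name>gateway.port.mapping.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>gateway.port.mapping.sandbox</name>
        <value>8443</value>
    </property>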


[13/16] knox git commit: Merge branch 'master' into KNOX-998-Package_Restructuring

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/main/java/org/apache/knox/gateway/services/topology/impl/DefaultTopologyService.java
----------------------------------------------------------------------
diff --cc gateway-server/src/main/java/org/apache/knox/gateway/services/topology/impl/DefaultTopologyService.java
index c6e373d,0000000..543d294
mode 100644,000000..100644
--- a/gateway-server/src/main/java/org/apache/knox/gateway/services/topology/impl/DefaultTopologyService.java
+++ b/gateway-server/src/main/java/org/apache/knox/gateway/services/topology/impl/DefaultTopologyService.java
@@@ -1,895 -1,0 +1,915 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.knox.gateway.services.topology.impl;
 +
 +
 +import org.apache.commons.digester3.Digester;
 +import org.apache.commons.digester3.binder.DigesterLoader;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.commons.io.FilenameUtils;
 +import org.apache.commons.io.monitor.FileAlterationListener;
 +import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
 +import org.apache.commons.io.monitor.FileAlterationMonitor;
 +import org.apache.commons.io.monitor.FileAlterationObserver;
 +import org.apache.knox.gateway.GatewayMessages;
 +import org.apache.knox.gateway.GatewayServer;
 +import org.apache.knox.gateway.audit.api.Action;
 +import org.apache.knox.gateway.audit.api.ActionOutcome;
 +import org.apache.knox.gateway.audit.api.AuditServiceFactory;
 +import org.apache.knox.gateway.audit.api.Auditor;
 +import org.apache.knox.gateway.audit.api.ResourceType;
 +import org.apache.knox.gateway.audit.log4j.audit.AuditConstants;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.apache.knox.gateway.service.definition.ServiceDefinition;
 +import org.apache.knox.gateway.services.GatewayServices;
 +import org.apache.knox.gateway.services.ServiceLifecycleException;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.gateway.services.topology.TopologyService;
 +import org.apache.knox.gateway.topology.ClusterConfigurationMonitorService;
 +import org.apache.knox.gateway.topology.Topology;
 +import org.apache.knox.gateway.topology.TopologyEvent;
 +import org.apache.knox.gateway.topology.TopologyListener;
 +import org.apache.knox.gateway.topology.TopologyMonitor;
 +import org.apache.knox.gateway.topology.TopologyProvider;
 +import org.apache.knox.gateway.topology.builder.TopologyBuilder;
 +import org.apache.knox.gateway.topology.discovery.ClusterConfigurationMonitor;
 +import org.apache.knox.gateway.topology.monitor.RemoteConfigurationMonitor;
 +import org.apache.knox.gateway.topology.monitor.RemoteConfigurationMonitorFactory;
++import org.apache.knox.gateway.topology.simple.SimpleDescriptor;
++import org.apache.knox.gateway.topology.simple.SimpleDescriptorFactory;
 +import org.apache.knox.gateway.topology.simple.SimpleDescriptorHandler;
 +import org.apache.knox.gateway.topology.validation.TopologyValidator;
 +import org.apache.knox.gateway.topology.xml.AmbariFormatXmlTopologyRules;
 +import org.apache.knox.gateway.topology.xml.KnoxFormatXmlTopologyRules;
 +import org.apache.knox.gateway.util.ServiceDefinitionsLoader;
 +import org.eclipse.persistence.jaxb.JAXBContextProperties;
 +import org.xml.sax.SAXException;
 +
 +import javax.xml.bind.JAXBContext;
 +import javax.xml.bind.JAXBException;
 +import javax.xml.bind.Marshaller;
 +import java.io.File;
 +import java.io.FileFilter;
 +import java.io.IOException;
 +import java.net.URISyntaxException;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Collection;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Set;
 +
 +import static org.apache.commons.digester3.binder.DigesterLoader.newLoader;
 +
 +
 +public class DefaultTopologyService
 +    extends FileAlterationListenerAdaptor
 +    implements TopologyService, TopologyMonitor, TopologyProvider, FileFilter, FileAlterationListener {
 +
 +  private static Auditor auditor = AuditServiceFactory.getAuditService().getAuditor(
 +    AuditConstants.DEFAULT_AUDITOR_NAME, AuditConstants.KNOX_SERVICE_NAME,
 +    AuditConstants.KNOX_COMPONENT_NAME);
 +
 +  private static final List<String> SUPPORTED_TOPOLOGY_FILE_EXTENSIONS = new ArrayList<String>();
 +  static {
 +    SUPPORTED_TOPOLOGY_FILE_EXTENSIONS.add("xml");
 +    SUPPORTED_TOPOLOGY_FILE_EXTENSIONS.add("conf");
 +  }
 +
 +  private static GatewayMessages log = MessagesFactory.get(GatewayMessages.class);
 +  private static DigesterLoader digesterLoader = newLoader(new KnoxFormatXmlTopologyRules(), new AmbariFormatXmlTopologyRules());
 +  private List<FileAlterationMonitor> monitors = new ArrayList<>();
 +  private File topologiesDirectory;
 +  private File sharedProvidersDirectory;
 +  private File descriptorsDirectory;
 +
 +  private DescriptorsMonitor descriptorsMonitor;
 +
 +  private Set<TopologyListener> listeners;
 +  private volatile Map<File, Topology> topologies;
 +  private AliasService aliasService;
 +
 +  private RemoteConfigurationMonitor remoteMonitor = null;
 +
 +  private Topology loadTopology(File file) throws IOException, SAXException, URISyntaxException, InterruptedException {
 +    final long TIMEOUT = 250; //ms
 +    final long DELAY = 50; //ms
 +    log.loadingTopologyFile(file.getAbsolutePath());
 +    Topology topology;
 +    long start = System.currentTimeMillis();
 +    while (true) {
 +      try {
 +        topology = loadTopologyAttempt(file);
 +        break;
 +      } catch (IOException e) {
 +        if (System.currentTimeMillis() - start < TIMEOUT) {
 +          log.failedToLoadTopologyRetrying(file.getAbsolutePath(), Long.toString(DELAY), e);
 +          Thread.sleep(DELAY);
 +        } else {
 +          throw e;
 +        }
 +      } catch (SAXException e) {
 +        if (System.currentTimeMillis() - start < TIMEOUT) {
 +          log.failedToLoadTopologyRetrying(file.getAbsolutePath(), Long.toString(DELAY), e);
 +          Thread.sleep(DELAY);
 +        } else {
 +          throw e;
 +        }
 +      }
 +    }
 +    return topology;
 +  }
 +
 +  private Topology loadTopologyAttempt(File file) throws IOException, SAXException, URISyntaxException {
 +    Topology topology;
 +    Digester digester = digesterLoader.newDigester();
 +    TopologyBuilder topologyBuilder = digester.parse(FileUtils.openInputStream(file));
 +    if (null == topologyBuilder) {
 +      return null;
 +    }
 +    topology = topologyBuilder.build();
 +    topology.setUri(file.toURI());
 +    topology.setName(FilenameUtils.removeExtension(file.getName()));
 +    topology.setTimestamp(file.lastModified());
 +    return topology;
 +  }
 +
 +  private void redeployTopology(Topology topology) {
 +    File topologyFile = new File(topology.getUri());
 +    try {
 +      TopologyValidator tv = new TopologyValidator(topology);
 +
 +      if(!tv.validateTopology()) {
 +        throw new SAXException(tv.getErrorString());
 +      }
 +
 +      long start = System.currentTimeMillis();
 +      long limit = 1000L; // One second.
 +      long elapsed = 1;
 +      while (elapsed <= limit) {
 +        try {
 +          long origTimestamp = topologyFile.lastModified();
 +          long setTimestamp = Math.max(System.currentTimeMillis(), topologyFile.lastModified() + elapsed);
 +          if(topologyFile.setLastModified(setTimestamp)) {
 +            long newTimestamp = topologyFile.lastModified();
 +            if(newTimestamp > origTimestamp) {
 +              break;
 +            } else {
 +              Thread.sleep(10);
 +              elapsed = System.currentTimeMillis() - start;
 +              continue;
 +            }
 +          } else {
 +            auditor.audit(Action.REDEPLOY, topology.getName(), ResourceType.TOPOLOGY,
 +                ActionOutcome.FAILURE);
 +            log.failedToRedeployTopology(topology.getName());
 +            break;
 +          }
 +        } catch (InterruptedException e) {
 +          auditor.audit(Action.REDEPLOY, topology.getName(), ResourceType.TOPOLOGY,
 +              ActionOutcome.FAILURE);
 +          log.failedToRedeployTopology(topology.getName(), e);
 +          e.printStackTrace();
 +        }
 +      }
 +    } catch (SAXException e) {
 +      auditor.audit(Action.REDEPLOY, topology.getName(), ResourceType.TOPOLOGY, ActionOutcome.FAILURE);
 +      log.failedToRedeployTopology(topology.getName(), e);
 +    }
 +  }
 +
 +  private List<TopologyEvent> createChangeEvents(
 +      Map<File, Topology> oldTopologies,
 +      Map<File, Topology> newTopologies) {
 +    ArrayList<TopologyEvent> events = new ArrayList<TopologyEvent>();
 +    // Go through the old topologies and find anything that was deleted.
 +    for (File file : oldTopologies.keySet()) {
 +      if (!newTopologies.containsKey(file)) {
 +        events.add(new TopologyEvent(TopologyEvent.Type.DELETED, oldTopologies.get(file)));
 +      }
 +    }
 +    // Go through the new topologies and figure out what was updated vs added.
 +    for (File file : newTopologies.keySet()) {
 +      if (oldTopologies.containsKey(file)) {
 +        Topology oldTopology = oldTopologies.get(file);
 +        Topology newTopology = newTopologies.get(file);
 +        if (newTopology.getTimestamp() > oldTopology.getTimestamp()) {
 +          events.add(new TopologyEvent(TopologyEvent.Type.UPDATED, newTopologies.get(file)));
 +        }
 +      } else {
 +        events.add(new TopologyEvent(TopologyEvent.Type.CREATED, newTopologies.get(file)));
 +      }
 +    }
 +    return events;
 +  }
 +
 +  private File calculateAbsoluteProvidersConfigDir(GatewayConfig config) {
 +    File pcDir = new File(config.getGatewayProvidersConfigDir());
 +    return pcDir.getAbsoluteFile();
 +  }
 +
 +  private File calculateAbsoluteDescriptorsDir(GatewayConfig config) {
 +    File descDir = new File(config.getGatewayDescriptorsDir());
 +    return descDir.getAbsoluteFile();
 +  }
 +
 +  private File calculateAbsoluteTopologiesDir(GatewayConfig config) {
 +    File topoDir = new File(config.getGatewayTopologyDir());
 +    topoDir = topoDir.getAbsoluteFile();
 +    return topoDir;
 +  }
 +
 +  private File calculateAbsoluteConfigDir(GatewayConfig config) {
 +    File configDir;
 +
 +    String path = config.getGatewayConfDir();
 +    configDir = (path != null) ? new File(path) : (new File(config.getGatewayTopologyDir())).getParentFile();
 +
 +    return configDir.getAbsoluteFile();
 +  }
 +
 +  private void  initListener(FileAlterationMonitor  monitor,
 +                            File                   directory,
 +                            FileFilter             filter,
 +                            FileAlterationListener listener) {
 +    monitors.add(monitor);
 +    FileAlterationObserver observer = new FileAlterationObserver(directory, filter);
 +    observer.addListener(listener);
 +    monitor.addObserver(observer);
 +  }
 +
 +  private void initListener(File directory, FileFilter filter, FileAlterationListener listener) throws IOException, SAXException {
 +    // Increasing the monitoring interval to 5 seconds as profiling has shown
 +    // this is rather expensive in terms of generated garbage objects.
 +    initListener(new FileAlterationMonitor(5000L), directory, filter, listener);
 +  }
 +
 +  private Map<File, Topology> loadTopologies(File directory) {
 +    Map<File, Topology> map = new HashMap<>();
 +    if (directory.isDirectory() && directory.canRead()) {
 +      File[] existingTopologies = directory.listFiles(this);
 +      if (existingTopologies != null) {
 +        for (File file : existingTopologies) {
 +          try {
 +            Topology loadTopology = loadTopology(file);
 +            if (null != loadTopology) {
 +              map.put(file, loadTopology);
 +            } else {
 +              auditor.audit(Action.LOAD, file.getAbsolutePath(), ResourceType.TOPOLOGY,
 +                      ActionOutcome.FAILURE);
 +              log.failedToLoadTopology(file.getAbsolutePath());
 +            }
 +          } catch (Exception e) {
 +            // It might make sense to propagate this exception; for now, audit and log the failure.
 +            // A single catch suffices here, since the IOException and SAXException cases were
 +            // handled identically.
 +            auditor.audit(Action.LOAD, file.getAbsolutePath(), ResourceType.TOPOLOGY,
 +                    ActionOutcome.FAILURE);
 +            log.failedToLoadTopology(file.getAbsolutePath(), e);
 +          }
 +        }
 +      }
 +    }
 +    return map;
 +  }
 +
 +  public void setAliasService(AliasService as) {
 +    this.aliasService = as;
 +  }
 +
 +  public void deployTopology(Topology t) {
 +    try {
 +      File temp = new File(topologiesDirectory.getAbsolutePath() + "/" + t.getName() + ".xml.temp");
 +      Package topologyPkg = Topology.class.getPackage();
 +      String pkgName = topologyPkg.getName();
 +      String bindingFile = pkgName.replace(".", "/") + "/topology_binding-xml.xml";
 +
 +      Map<String, Object> properties = new HashMap<>(1);
 +      properties.put(JAXBContextProperties.OXM_METADATA_SOURCE, bindingFile);
 +      JAXBContext jc = JAXBContext.newInstance(pkgName, Topology.class.getClassLoader(), properties);
 +      Marshaller mr = jc.createMarshaller();
 +
 +      mr.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
 +      mr.marshal(t, temp);
 +
 +      File topology = new File(topologiesDirectory.getAbsolutePath() + "/" + t.getName() + ".xml");
 +      if(!temp.renameTo(topology)) {
 +        FileUtils.forceDelete(temp);
 +        throw new IOException("Could not rename temp file");
 +      }
 +
 +      // Validate the topology, surfacing any errors if it is invalid.
 +      TopologyValidator validator = new TopologyValidator(topology.getAbsolutePath());
 +      if (!validator.validateTopology()) {
 +        throw new SAXException(validator.getErrorString());
 +      }
 +
 +    } catch (JAXBException | IOException | SAXException e) {
 +      auditor.audit(Action.DEPLOY, t.getName(), ResourceType.TOPOLOGY, ActionOutcome.FAILURE);
 +      log.failedToDeployTopology(t.getName(), e);
 +    }
 +    reloadTopologies();
 +  }
 +
 +  public void redeployTopologies(String topologyName) {
 +    for (Topology topology : getTopologies()) {
 +      if (topologyName == null || topologyName.equals(topology.getName())) {
 +        redeployTopology(topology);
 +      }
 +    }
 +  }
 +
 +  public void reloadTopologies() {
 +    try {
 +      synchronized (this) {
 +        Map<File, Topology> oldTopologies = topologies;
 +        Map<File, Topology> newTopologies = loadTopologies(topologiesDirectory);
 +        List<TopologyEvent> events = createChangeEvents(oldTopologies, newTopologies);
 +        topologies = newTopologies;
 +        notifyChangeListeners(events);
 +      }
 +    } catch (Exception e) {
 +      // Maybe it makes sense to throw exception
 +      log.failedToReloadTopologies(e);
 +    }
 +  }
 +
 +  public void deleteTopology(Topology t) {
 +    File topoDir = topologiesDirectory;
 +
 +    if(topoDir.isDirectory() && topoDir.canRead()) {
 +      for (File f : listFiles(topoDir)) {
 +        String fName = FilenameUtils.getBaseName(f.getName());
 +        if(fName.equals(t.getName())) {
 +          f.delete();
 +        }
 +      }
 +    }
 +    reloadTopologies();
 +  }
 +
 +  private void notifyChangeListeners(List<TopologyEvent> events) {
 +    for (TopologyListener listener : listeners) {
 +      try {
 +        listener.handleTopologyEvent(events);
 +      } catch (RuntimeException e) {
 +        auditor.audit(Action.LOAD, "Topology_Event", ResourceType.TOPOLOGY, ActionOutcome.FAILURE);
 +        log.failedToHandleTopologyEvents(e);
 +      }
 +    }
 +  }
 +
 +  public Map<String, List<String>> getServiceTestURLs(Topology t, GatewayConfig config) {
 +    File tFile = null;
 +    Map<String, List<String>> urls = new HashMap<>();
 +    if (topologiesDirectory.isDirectory() && topologiesDirectory.canRead()) {
 +      for (File f : listFiles(topologiesDirectory)) {
 +        if (FilenameUtils.removeExtension(f.getName()).equals(t.getName())) {
 +          tFile = f;
 +        }
 +      }
 +    }
 +    Set<ServiceDefinition> defs;
 +    if(tFile != null) {
 +      defs = ServiceDefinitionsLoader.getServiceDefinitions(new File(config.getGatewayServicesDir()));
 +
 +      for(ServiceDefinition def : defs) {
 +        urls.put(def.getRole(), def.getTestURLs());
 +      }
 +    }
 +    return urls;
 +  }
 +
 +  public Collection<Topology> getTopologies() {
 +    Map<File, Topology> map = topologies;
 +    return Collections.unmodifiableCollection(map.values());
 +  }
 +
 +  @Override
 +  public boolean deployProviderConfiguration(String name, String content) {
 +    return writeConfig(sharedProvidersDirectory, name, content);
 +  }
 +
 +  @Override
 +  public Collection<File> getProviderConfigurations() {
 +    List<File> providerConfigs = new ArrayList<>();
 +    for (File providerConfig : listFiles(sharedProvidersDirectory)) {
 +      if (SharedProviderConfigMonitor.SUPPORTED_EXTENSIONS.contains(FilenameUtils.getExtension(providerConfig.getName()))) {
 +        providerConfigs.add(providerConfig);
 +      }
 +    }
 +    return providerConfigs;
 +  }
 +
 +  @Override
 +  public boolean deleteProviderConfiguration(String name) {
 +    boolean result = false;
 +
 +    File providerConfig = getExistingFile(sharedProvidersDirectory, name);
 +    if (providerConfig != null) {
 +      List<String> references = descriptorsMonitor.getReferencingDescriptors(providerConfig.getAbsolutePath());
 +      if (references.isEmpty()) {
 +        result = providerConfig.delete();
 +      } else {
 +        log.preventedDeletionOfSharedProviderConfiguration(providerConfig.getAbsolutePath());
 +      }
 +    } else {
 +      result = true; // If it already does NOT exist, then the delete effectively succeeded
 +    }
 +
 +    return result;
 +  }
 +
 +  @Override
 +  public boolean deployDescriptor(String name, String content) {
 +    return writeConfig(descriptorsDirectory, name, content);
 +  }
 +
 +  @Override
 +  public Collection<File> getDescriptors() {
 +    List<File> descriptors = new ArrayList<>();
 +    for (File descriptor : listFiles(descriptorsDirectory)) {
 +      if (DescriptorsMonitor.SUPPORTED_EXTENSIONS.contains(FilenameUtils.getExtension(descriptor.getName()))) {
 +        descriptors.add(descriptor);
 +      }
 +    }
 +    return descriptors;
 +  }
 +
 +  @Override
 +  public boolean deleteDescriptor(String name) {
 +    File descriptor = getExistingFile(descriptorsDirectory, name);
 +    return (descriptor == null) || descriptor.delete();
 +  }
 +
 +  @Override
 +  public void addTopologyChangeListener(TopologyListener listener) {
 +    listeners.add(listener);
 +  }
 +
 +  @Override
 +  public void startMonitor() throws Exception {
 +    // Start the local configuration monitors
 +    for (FileAlterationMonitor monitor : monitors) {
 +      monitor.start();
 +    }
 +
 +    // Start the remote configuration monitor, if it has been initialized
 +    if (remoteMonitor != null) {
 +      try {
 +        remoteMonitor.start();
 +      } catch (Exception e) {
 +        log.remoteConfigurationMonitorStartFailure(remoteMonitor.getClass().getTypeName(), e.getLocalizedMessage(), e);
 +      }
 +    }
 +  }
 +
 +  @Override
 +  public void stopMonitor() throws Exception {
 +    // Stop the local configuration monitors
 +    for (FileAlterationMonitor monitor : monitors) {
 +      monitor.stop();
 +    }
 +
 +    // Stop the remote configuration monitor, if it has been initialized
 +    if (remoteMonitor != null) {
 +      remoteMonitor.stop();
 +    }
 +  }
 +
 +  @Override
 +  public boolean accept(File file) {
 +    boolean accept = false;
 +    if (!file.isDirectory() && file.canRead()) {
 +      String extension = FilenameUtils.getExtension(file.getName());
 +      if (SUPPORTED_TOPOLOGY_FILE_EXTENSIONS.contains(extension)) {
 +        accept = true;
 +      }
 +    }
 +    return accept;
 +  }
 +
 +  @Override
 +  public void onFileCreate(File file) {
 +    onFileChange(file);
 +  }
 +
 +  @Override
 +  public void onFileDelete(File file) {
 +    // When a full topology descriptor is deleted, also delete any corresponding simple
 +    // descriptor to prevent unintended subsequent regeneration of the topology descriptor
 +    for (String ext : DescriptorsMonitor.SUPPORTED_EXTENSIONS) {
 +      File simpleDesc =
 +              new File(descriptorsDirectory, FilenameUtils.getBaseName(file.getName()) + "." + ext);
 +      if (simpleDesc.exists()) {
 +        log.deletingDescriptorForTopologyDeletion(simpleDesc.getName(), file.getName());
 +        simpleDesc.delete();
 +      }
 +    }
 +
 +    onFileChange(file);
 +  }
 +
 +  @Override
 +  public void onFileChange(File file) {
 +    reloadTopologies();
 +  }
 +
 +  @Override
 +  public void stop() {
 +  }
 +
 +  @Override
 +  public void start() {
 +    // Register a cluster configuration monitor listener for change notifications
 +    ClusterConfigurationMonitorService ccms =
 +                  GatewayServer.getGatewayServices().getService(GatewayServices.CLUSTER_CONFIGURATION_MONITOR_SERVICE);
 +    ccms.addListener(new TopologyDiscoveryTrigger(this));
 +  }
 +
 +  @Override
 +  public void init(GatewayConfig config, Map<String, String> options) throws ServiceLifecycleException {
 +
 +    try {
 +      listeners  = new HashSet<>();
 +      topologies = new HashMap<>();
 +
 +      topologiesDirectory = calculateAbsoluteTopologiesDir(config);
 +
 +      File configDirectory = calculateAbsoluteConfigDir(config);
 +      descriptorsDirectory = new File(configDirectory, "descriptors");
 +      sharedProvidersDirectory = new File(configDirectory, "shared-providers");
 +
 +      // Add support for conf/topologies
 +      initListener(topologiesDirectory, this, this);
 +
 +      // Add support for conf/descriptors
 +      descriptorsMonitor = new DescriptorsMonitor(topologiesDirectory, aliasService);
 +      initListener(descriptorsDirectory,
 +                   descriptorsMonitor,
 +                   descriptorsMonitor);
 +      log.monitoringDescriptorChangesInDirectory(descriptorsDirectory.getAbsolutePath());
 +
 +      // Add support for conf/shared-providers
 +      SharedProviderConfigMonitor spm = new SharedProviderConfigMonitor(descriptorsMonitor, descriptorsDirectory);
 +      initListener(sharedProvidersDirectory, spm, spm);
 +      log.monitoringProviderConfigChangesInDirectory(sharedProvidersDirectory.getAbsolutePath());
 +
-       // For all the descriptors currently in the descriptors dir at start-up time, trigger topology generation.
++      // For all the descriptors currently in the descriptors dir at start-up time, determine if topology regeneration
++      // is required.
 +      // This happens prior to the start-up loading of the topologies.
 +      String[] descriptorFilenames =  descriptorsDirectory.list();
 +      if (descriptorFilenames != null) {
 +        for (String descriptorFilename : descriptorFilenames) {
 +          if (DescriptorsMonitor.isDescriptorFile(descriptorFilename)) {
++            String topologyName = FilenameUtils.getBaseName(descriptorFilename);
++            File existingDescriptorFile = getExistingFile(descriptorsDirectory, topologyName);
++
 +            // If there isn't a corresponding topology file, or if the descriptor has been modified since the
 +            // corresponding topology file was generated, then trigger generation of one
-             File matchingTopologyFile = getExistingFile(topologiesDirectory, FilenameUtils.getBaseName(descriptorFilename));
-             if (matchingTopologyFile == null ||
-                     matchingTopologyFile.lastModified() < (new File(descriptorsDirectory, descriptorFilename)).lastModified()) {
-               descriptorsMonitor.onFileChange(new File(descriptorsDirectory, descriptorFilename));
++            File matchingTopologyFile = getExistingFile(topologiesDirectory, topologyName);
++            if (matchingTopologyFile == null || matchingTopologyFile.lastModified() < existingDescriptorFile.lastModified()) {
++              descriptorsMonitor.onFileChange(existingDescriptorFile);
++            } else {
++              // If regeneration is NOT required, then we at least need to record the provider
++              // configuration reference relationship (KNOX-1144)
++              String normalizedDescriptorPath = FilenameUtils.normalize(existingDescriptorFile.getAbsolutePath());
++
++              // Parse the descriptor to determine the provider config reference
++              SimpleDescriptor sd = SimpleDescriptorFactory.parse(normalizedDescriptorPath);
++              if (sd != null) {
++                File referencedProviderConfig =
++                           getExistingFile(sharedProvidersDirectory, FilenameUtils.getBaseName(sd.getProviderConfig()));
++                if (referencedProviderConfig != null) {
++                  List<String> references =
++                         descriptorsMonitor.getReferencingDescriptors(referencedProviderConfig.getAbsolutePath());
++                  if (!references.contains(normalizedDescriptorPath)) {
++                    references.add(normalizedDescriptorPath);
++                  }
++                }
++              }
 +            }
 +          }
 +        }
 +      }
 +
 +      // Initialize the remote configuration monitor, if it has been configured
 +      remoteMonitor = RemoteConfigurationMonitorFactory.get(config);
 +
 +    } catch (IOException | SAXException io) {
 +      throw new ServiceLifecycleException(io.getMessage());
 +    }
 +  }
 +
 +  /**
 +   * Utility method for listing the files in the specified directory.
 +   * This method is preferable to File#listFiles() because it never returns null.
 +   *
 +   * @param directory The directory whose files should be returned.
 +   *
 +   * @return A List of the files in the directory; empty if there are none.
 +   */
 +  private static List<File> listFiles(File directory) {
 +    List<File> result;
 +    File[] files = directory.listFiles();
 +    if (files != null) {
 +      result = Arrays.asList(files);
 +    } else {
 +      result = Collections.emptyList();
 +    }
 +    return result;
 +  }
 +
 +  /**
 +   * Search for a file in the specified directory whose base name (filename without extension) matches the
 +   * specified basename.
 +   *
 +   * @param directory The directory in which to search.
 +   * @param basename  The basename of interest.
 +   *
 +   * @return The matching File, or null if there is no match.
 +   */
 +  private static File getExistingFile(File directory, String basename) {
 +    File match = null;
 +    for (File file : listFiles(directory)) {
 +      if (FilenameUtils.getBaseName(file.getName()).equals(basename)) {
 +        match = file;
 +        break;
 +      }
 +    }
 +    return match;
 +  }
 +
 +  /**
 +   * Write the specified content to a file.
 +   *
 +   * @param dest    The destination directory.
 +   * @param name    The name of the file.
 +   * @param content The contents of the file.
 +   *
 +   * @return true, if the write succeeds; otherwise, false.
 +   */
 +  private static boolean writeConfig(File dest, String name, String content) {
 +    boolean result = false;
 +
 +    File destFile = new File(dest, name);
 +    try {
 +      FileUtils.writeStringToFile(destFile, content);
 +      log.wroteConfigurationFile(destFile.getAbsolutePath());
 +      result = true;
 +    } catch (IOException e) {
 +      log.failedToWriteConfigurationFile(destFile.getAbsolutePath(), e);
 +    }
 +
 +    return result;
 +  }
 +
 +
 +  /**
 +   * Change handler for simple descriptors
 +   */
 +  public static class DescriptorsMonitor extends FileAlterationListenerAdaptor
 +                                          implements FileFilter {
 +
 +    static final List<String> SUPPORTED_EXTENSIONS = new ArrayList<>();
 +    static {
 +      SUPPORTED_EXTENSIONS.add("json");
 +      SUPPORTED_EXTENSIONS.add("yml");
 +      SUPPORTED_EXTENSIONS.add("yaml");
 +    }
 +
 +    private File topologiesDir;
 +
 +    private AliasService aliasService;
 +
 +    private Map<String, List<String>> providerConfigReferences = new HashMap<>();
 +
 +
 +    static boolean isDescriptorFile(String filename) {
 +      return SUPPORTED_EXTENSIONS.contains(FilenameUtils.getExtension(filename));
 +    }
 +
 +    public DescriptorsMonitor(File topologiesDir, AliasService aliasService) {
 +      this.topologiesDir  = topologiesDir;
 +      this.aliasService   = aliasService;
 +    }
 +
 +    List<String> getReferencingDescriptors(String providerConfigPath) {
-       List<String> result = providerConfigReferences.get(FilenameUtils.normalize(providerConfigPath));
-       if (result == null) {
-         result = Collections.emptyList();
-       }
-       return result;
++      String normalizedPath = FilenameUtils.normalize(providerConfigPath);
++      return providerConfigReferences.computeIfAbsent(normalizedPath, p -> new ArrayList<>());
 +    }
 +
 +    @Override
 +    public void onFileCreate(File file) {
 +      onFileChange(file);
 +    }
 +
 +    @Override
 +    public void onFileDelete(File file) {
 +      // For simple descriptors, we need to make sure to delete any corresponding full topology descriptors to trigger undeployment
 +      for (String ext : DefaultTopologyService.SUPPORTED_TOPOLOGY_FILE_EXTENSIONS) {
 +        File topologyFile =
 +                new File(topologiesDir, FilenameUtils.getBaseName(file.getName()) + "." + ext);
 +        if (topologyFile.exists()) {
 +          log.deletingTopologyForDescriptorDeletion(topologyFile.getName(), file.getName());
 +          topologyFile.delete();
 +        }
 +      }
 +
 +      String normalizedFilePath = FilenameUtils.normalize(file.getAbsolutePath());
 +      String reference = null;
 +      for (Map.Entry<String, List<String>> entry : providerConfigReferences.entrySet()) {
 +        if (entry.getValue().contains(normalizedFilePath)) {
 +          reference = entry.getKey();
 +          break;
 +        }
 +      }
 +
 +      if (reference != null) {
 +        providerConfigReferences.get(reference).remove(normalizedFilePath);
 +        log.removedProviderConfigurationReference(normalizedFilePath, reference);
 +      }
 +    }
 +
 +    @Override
 +    public void onFileChange(File file) {
 +      try {
 +        // When a simple descriptor has been created or modified, generate the new topology descriptor
 +        Map<String, File> result = SimpleDescriptorHandler.handle(file, topologiesDir, aliasService);
 +        log.generatedTopologyForDescriptorChange(result.get("topology").getName(), file.getName());
 +
 +        // Add the provider config reference relationship for handling updates to the provider config
 +        String providerConfig = FilenameUtils.normalize(result.get("reference").getAbsolutePath());
 +        List<String> refs = providerConfigReferences.computeIfAbsent(providerConfig, p -> new ArrayList<>());
 +        String descriptorName = FilenameUtils.normalize(file.getAbsolutePath());
 +        if (!refs.contains(descriptorName)) {
 +          // If the descriptor previously referenced another provider config, remove the stale
 +          // reference (List#remove is a no-op when the element is absent)
 +          for (List<String> descs : providerConfigReferences.values()) {
 +            descs.remove(descriptorName);
 +          }
 +
 +          // Add the current reference relationship
 +          refs.add(descriptorName);
 +          log.addedProviderConfigurationReference(descriptorName, providerConfig);
 +        }
 +      } catch (Exception e) {
 +        log.simpleDescriptorHandlingError(file.getName(), e);
 +      }
 +    }
 +
 +    @Override
 +    public boolean accept(File file) {
 +      boolean accept = false;
 +      if (!file.isDirectory() && file.canRead()) {
 +        String extension = FilenameUtils.getExtension(file.getName());
 +        if (SUPPORTED_EXTENSIONS.contains(extension)) {
 +          accept = true;
 +        }
 +      }
 +      return accept;
 +    }
 +  }
 +
 +  /**
 +   * Change handler for shared provider configurations
 +   */
 +  public static class SharedProviderConfigMonitor extends FileAlterationListenerAdaptor
 +          implements FileFilter {
 +
 +    static final List<String> SUPPORTED_EXTENSIONS = new ArrayList<>();
 +    static {
 +      SUPPORTED_EXTENSIONS.add("xml");
 +    }
 +
 +    private DescriptorsMonitor descriptorsMonitor;
 +    private File descriptorsDir;
 +
 +
 +    SharedProviderConfigMonitor(DescriptorsMonitor descMonitor, File descriptorsDir) {
 +      this.descriptorsMonitor = descMonitor;
 +      this.descriptorsDir     = descriptorsDir;
 +    }
 +
 +    @Override
 +    public void onFileCreate(File file) {
 +      onFileChange(file);
 +    }
 +
 +    @Override
 +    public void onFileDelete(File file) {
 +      onFileChange(file);
 +    }
 +
 +    @Override
 +    public void onFileChange(File file) {
 +      // For shared provider configuration, we need to update any simple descriptors that reference it
 +      for (File descriptor : getReferencingDescriptors(file)) {
 +        descriptor.setLastModified(System.currentTimeMillis());
 +      }
 +    }
 +
 +    private List<File> getReferencingDescriptors(File sharedProviderConfig) {
 +      List<File> references = new ArrayList<>();
 +
 +      // The descriptors monitor already tracks which descriptors reference this provider
 +      // config, so consult its records directly rather than scanning the descriptors
 +      // directory, which would add the same references once per descriptor file.
 +      String providerConfigPath = FilenameUtils.normalize(sharedProviderConfig.getAbsolutePath());
 +      for (String reference : descriptorsMonitor.getReferencingDescriptors(providerConfigPath)) {
 +        references.add(new File(reference));
 +      }
 +
 +      return references;
 +    }
 +
 +    @Override
 +    public boolean accept(File file) {
 +      boolean accept = false;
 +      if (!file.isDirectory() && file.canRead()) {
 +        String extension = FilenameUtils.getExtension(file.getName());
 +        if (SUPPORTED_EXTENSIONS.contains(extension)) {
 +          accept = true;
 +        }
 +      }
 +      return accept;
 +    }
 +  }
 +
 +  /**
 +   * Listener for Ambari config change events, which will trigger re-generation (including re-discovery) of the
 +   * affected topologies.
 +   */
 +  private static class TopologyDiscoveryTrigger implements ClusterConfigurationMonitor.ConfigurationChangeListener {
 +
 +    private TopologyService topologyService = null;
 +
 +    TopologyDiscoveryTrigger(TopologyService topologyService) {
 +      this.topologyService = topologyService;
 +    }
 +
 +    @Override
 +    public void onConfigurationChange(String source, String clusterName) {
 +      log.noticedClusterConfigurationChange(source, clusterName);
 +      try {
 +        // Identify any descriptors associated with the cluster configuration change
 +        for (File descriptor : topologyService.getDescriptors()) {
 +          String descriptorContent = FileUtils.readFileToString(descriptor);
 +          if (descriptorContent.contains(source) && descriptorContent.contains(clusterName)) {
 +            log.triggeringTopologyRegeneration(source, clusterName, descriptor.getAbsolutePath());
 +            // 'Touch' the descriptor to trigger re-generation of the associated topology
 +            descriptor.setLastModified(System.currentTimeMillis());
 +          }
 +        }
 +      } catch (Exception e) {
 +        log.errorRespondingToConfigChange(source, clusterName, e);
 +      }
 +    }
 +  }
 +
 +}
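
[Editor's note] The initListener() methods above wire the topology, descriptor, and
shared-provider directories to Apache Commons IO's polling monitor. A minimal,
self-contained sketch of that same wiring follows; the directory path, filter, and
class name are illustrative only, not part of Knox:

    import java.io.File;
    import java.io.FileFilter;

    import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
    import org.apache.commons.io.monitor.FileAlterationMonitor;
    import org.apache.commons.io.monitor.FileAlterationObserver;

    public class MonitorWiringSketch {
        public static void main(String[] args) throws Exception {
            File dir = new File("/tmp/topologies");                  // hypothetical directory
            FileFilter xmlOnly = f -> f.getName().endsWith(".xml");  // stand-in for the service's accept()

            FileAlterationObserver observer = new FileAlterationObserver(dir, xmlOnly);
            observer.addListener(new FileAlterationListenerAdaptor() {
                @Override
                public void onFileChange(File file) {
                    System.out.println("Changed: " + file);  // the service reloads topologies here
                }
            });

            // 5000ms polling interval, matching the profiling note in initListener() above
            FileAlterationMonitor monitor = new FileAlterationMonitor(5000L);
            monitor.addObserver(observer);
            monitor.start();  // polls on a background thread until stop() is called
        }
    }

startMonitor()/stopMonitor() above simply start and stop each registered
FileAlterationMonitor, so the lifecycle in the sketch mirrors the service's.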

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-server/src/main/java/org/apache/knox/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
----------------------------------------------------------------------
diff --cc gateway-server/src/main/java/org/apache/knox/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
index efafee0,0000000..37d1ca6
mode 100644,000000..100644
--- a/gateway-server/src/main/java/org/apache/knox/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
+++ b/gateway-server/src/main/java/org/apache/knox/gateway/topology/monitor/DefaultRemoteConfigurationMonitor.java
@@@ -1,228 -1,0 +1,246 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.monitor;
 +
 +import org.apache.commons.io.FileUtils;
 +import org.apache.knox.gateway.GatewayMessages;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClient.ChildEntryListener;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClient.EntryListener;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClient;
 +import org.apache.knox.gateway.services.config.client.RemoteConfigurationRegistryClientService;
 +import org.apache.zookeeper.ZooDefs;
 +
 +import java.io.File;
 +import java.io.IOException;
 +import java.util.ArrayList;
++import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.List;
 +
 +
 +class DefaultRemoteConfigurationMonitor implements RemoteConfigurationMonitor {
 +
 +    private static final String NODE_KNOX = "/knox";
 +    private static final String NODE_KNOX_CONFIG = NODE_KNOX + "/config";
 +    private static final String NODE_KNOX_PROVIDERS = NODE_KNOX_CONFIG + "/shared-providers";
 +    private static final String NODE_KNOX_DESCRIPTORS = NODE_KNOX_CONFIG + "/descriptors";
 +
 +    private static GatewayMessages log = MessagesFactory.get(GatewayMessages.class);
 +
 +    // N.B. This is ZooKeeper-specific, and should be abstracted when another registry is supported
 +    private static final RemoteConfigurationRegistryClient.EntryACL AUTHENTICATED_USERS_ALL;
 +    static {
 +        AUTHENTICATED_USERS_ALL = new RemoteConfigurationRegistryClient.EntryACL() {
 +            public String getId() {
 +                return "";
 +            }
 +
 +            public String getType() {
 +                return "auth";
 +            }
 +
 +            public Object getPermissions() {
 +                return ZooDefs.Perms.ALL;
 +            }
 +
 +            public boolean canRead() {
 +                return true;
 +            }
 +
 +            public boolean canWrite() {
 +                return true;
 +            }
 +        };
 +    }
 +
 +    private RemoteConfigurationRegistryClient client = null;
 +
 +    private File providersDir;
 +    private File descriptorsDir;
 +
 +    /**
 +     * @param config                The gateway configuration
 +     * @param registryClientService The service from which the remote registry client should be acquired.
 +     */
 +    DefaultRemoteConfigurationMonitor(GatewayConfig                            config,
 +                                      RemoteConfigurationRegistryClientService registryClientService) {
 +        this.providersDir   = new File(config.getGatewayProvidersConfigDir());
 +        this.descriptorsDir = new File(config.getGatewayDescriptorsDir());
 +
 +        if (registryClientService != null) {
 +            String clientName = config.getRemoteConfigurationMonitorClientName();
 +            if (clientName != null) {
 +                this.client = registryClientService.get(clientName);
 +                if (this.client == null) {
 +                    log.unresolvedClientConfigurationForRemoteMonitoring(clientName);
 +                }
 +            } else {
 +                log.missingClientConfigurationForRemoteMonitoring();
 +            }
 +        }
 +    }
 +
 +    @Override
 +    public void start() throws Exception {
 +        if (client == null) {
 +            throw new IllegalStateException("Failed to acquire a remote configuration registry client.");
 +        }
 +
 +        final String monitorSource = client.getAddress();
 +        log.startingRemoteConfigurationMonitor(monitorSource);
 +
 +        // Ensure the existence of the expected entries and their associated ACLs
 +        ensureEntries();
 +
 +        // Confirm access to the remote provider configs directory znode
 +        List<String> providerConfigs = client.listChildEntries(NODE_KNOX_PROVIDERS);
 +        if (providerConfigs == null) {
 +            // Either the ZNode does not exist, or there is an authentication problem
 +            throw new IllegalStateException("Unable to access remote path: " + NODE_KNOX_PROVIDERS);
++        } else {
++            // Download any provider configs from the remote registry that are missing locally or
++            // have been modified, so that they are certain to be present before this monitor
++            // downloads any descriptors that reference them.
++            for (String providerConfig : providerConfigs) {
++                File localFile = new File(providersDir, providerConfig);
++
++                byte[] remoteContent = client.getEntryData(NODE_KNOX_PROVIDERS + "/" + providerConfig).getBytes();
++                if (!localFile.exists() || !Arrays.equals(remoteContent, FileUtils.readFileToByteArray(localFile))) {
++                    FileUtils.writeByteArrayToFile(localFile, remoteContent);
++                    log.downloadedRemoteConfigFile(providersDir.getName(), providerConfig);
++                }
++            }
 +        }
 +
 +        // Confirm access to the remote descriptors directory znode
 +        List<String> descriptors = client.listChildEntries(NODE_KNOX_DESCRIPTORS);
 +        if (descriptors == null) {
 +            // Either the ZNode does not exist, or there is an authentication problem
 +            throw new IllegalStateException("Unable to access remote path: " + NODE_KNOX_DESCRIPTORS);
 +        }
 +
 +        // Register a listener for provider config znode additions/removals
 +        client.addChildEntryListener(NODE_KNOX_PROVIDERS, new ConfigDirChildEntryListener(providersDir));
 +
 +        // Register a listener for descriptor znode additions/removals
 +        client.addChildEntryListener(NODE_KNOX_DESCRIPTORS, new ConfigDirChildEntryListener(descriptorsDir));
 +
 +        log.monitoringRemoteConfigurationSource(monitorSource);
 +    }
 +
 +
 +    @Override
 +    public void stop() throws Exception {
 +        client.removeEntryListener(NODE_KNOX_PROVIDERS);
 +        client.removeEntryListener(NODE_KNOX_DESCRIPTORS);
 +    }
 +
 +    private void ensureEntries() {
 +        ensureEntry(NODE_KNOX);
 +        ensureEntry(NODE_KNOX_CONFIG);
 +        ensureEntry(NODE_KNOX_PROVIDERS);
 +        ensureEntry(NODE_KNOX_DESCRIPTORS);
 +    }
 +
 +    private void ensureEntry(String name) {
 +        if (!client.entryExists(name)) {
 +            client.createEntry(name);
 +        } else {
 +            // Validate the ACL
 +            List<RemoteConfigurationRegistryClient.EntryACL> entryACLs = client.getACL(name);
 +            for (RemoteConfigurationRegistryClient.EntryACL entryACL : entryACLs) {
 +                // N.B. This is ZooKeeper-specific, and should be abstracted when another registry is supported
 +                // For now, check for ZooKeeper world:anyone with ANY permissions (even read-only)
 +                if (entryACL.getType().equals("world") && entryACL.getId().equals("anyone")) {
 +                    log.suspectWritableRemoteConfigurationEntry(name);
 +
 +                    // If the client is authenticated, but "anyone" can write the content, then the content may not
 +                    // be trustworthy.
 +                    if (client.isAuthenticationConfigured()) {
 +                        log.correctingSuspectWritableRemoteConfigurationEntry(name);
 +
 +                        // Replace the existing ACL with one that permits only authenticated users
 +                        client.setACL(name, Collections.singletonList(AUTHENTICATED_USERS_ALL));
 +                    }
 +                }
 +            }
 +        }
 +    }
 +
 +    private static class ConfigDirChildEntryListener implements ChildEntryListener {
 +        File localDir;
 +
 +        ConfigDirChildEntryListener(File localDir) {
 +            this.localDir = localDir;
 +        }
 +
 +        @Override
 +        public void childEvent(RemoteConfigurationRegistryClient client, Type type, String path) {
 +            File localFile = new File(localDir, path.substring(path.lastIndexOf("/") + 1));
 +
 +            switch (type) {
 +                case REMOVED:
 +                    FileUtils.deleteQuietly(localFile);
 +                    log.deletedRemoteConfigFile(localDir.getName(), localFile.getName());
 +                    try {
 +                        client.removeEntryListener(path);
 +                    } catch (Exception e) {
 +                        log.errorRemovingRemoteConfigurationListenerForPath(path, e);
 +                    }
 +                    break;
 +                case ADDED:
 +                    try {
 +                        client.addEntryListener(path, new ConfigEntryListener(localDir));
 +                    } catch (Exception e) {
 +                        log.errorAddingRemoteConfigurationListenerForPath(path, e);
 +                    }
 +                    break;
 +            }
 +        }
 +    }
 +
 +    private static class ConfigEntryListener implements EntryListener {
 +        private File localDir;
 +
 +        ConfigEntryListener(File localDir) {
 +            this.localDir = localDir;
 +        }
 +
 +        @Override
 +        public void entryChanged(RemoteConfigurationRegistryClient client, String path, byte[] data) {
 +            File localFile = new File(localDir, path.substring(path.lastIndexOf("/") + 1));
 +            if (data != null) {
 +                try {
-                     FileUtils.writeByteArrayToFile(localFile, data);
-                     log.downloadedRemoteConfigFile(localDir.getName(), localFile.getName());
++                    // If there is no corresponding local file, or the content is different from the existing local
++                    // file, write the data to the local file.
++                    if (!localFile.exists() || !Arrays.equals(FileUtils.readFileToByteArray(localFile), data)) {
++                        FileUtils.writeByteArrayToFile(localFile, data);
++                        log.downloadedRemoteConfigFile(localDir.getName(), localFile.getName());
++                    }
 +                } catch (IOException e) {
 +                    log.errorDownloadingRemoteConfiguration(path, e);
 +                }
 +            } else {
 +                FileUtils.deleteQuietly(localFile);
 +                log.deletedRemoteConfigFile(localDir.getName(), localFile.getName());
 +            }
 +        }
 +    }
 +
 +}
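
[Editor's note] Both the start() bulk download and ConfigEntryListener.entryChanged()
above now write a remote entry to disk only when the local copy is missing or its bytes
differ. That guard can be factored into a small helper; a sketch under that assumption
(the class and method names here are hypothetical):

    import java.io.File;
    import java.io.IOException;
    import java.util.Arrays;

    import org.apache.commons.io.FileUtils;

    final class RemoteConfigSyncSketch {
        /**
         * Writes the remote content to the local file only when the file is missing
         * or its content differs. Returns true if the file was (re)written.
         */
        static boolean syncLocalFile(File localFile, byte[] remoteContent) throws IOException {
            if (localFile.exists()
                    && Arrays.equals(remoteContent, FileUtils.readFileToByteArray(localFile))) {
                return false;  // identical content: skip the write
            }
            FileUtils.writeByteArrayToFile(localFile, remoteContent);
            return true;
        }
    }

Skipping identical writes presumably also keeps the local file-alteration monitors from
observing spurious modifications and re-triggering topology generation.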


[15/16] knox git commit: Merge branch 'master' into KNOX-998-Package_Restructuring

Posted by mo...@apache.org.
Merge branch 'master' into KNOX-998-Package_Restructuring

# Conflicts:
#	gateway-server/src/main/java/org/apache/knox/gateway/services/topology/impl/DefaultTopologyService.java


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/e5fd0622
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/e5fd0622
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/e5fd0622

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: e5fd0622493a7e3c62811ea03ff7931c979dd87a
Parents: e766b3b 99e6a54
Author: Sandeep More <mo...@apache.org>
Authored: Tue Jan 9 14:25:16 2018 -0500
Committer: Sandeep More <mo...@apache.org>
Committed: Tue Jan 9 14:25:16 2018 -0500

----------------------------------------------------------------------
 LICENSE                                         |  40 ++++-
 NOTICE                                          |   4 +-
 .../discovery/ambari/ServiceURLCreator.java     |  32 ++++
 .../discovery/ambari/ServiceURLFactory.java     |  75 +++++++++
 .../discovery/ambari/WebHdfsUrlCreator.java     |  84 ++++++++++
 .../discovery/ambari/AmbariClientCommon.java    |  14 +-
 .../discovery/ambari/AmbariCluster.java         |   6 +-
 .../ambari/AmbariConfigurationMonitor.java      |  52 +++++--
 .../ambari/AmbariDynamicServiceURLCreator.java  |   4 +-
 .../discovery/ambari/PropertyEqualsHandler.java |  20 ++-
 .../ambari/ServiceURLPropertyConfig.java        |   7 +-
 .../ambari-service-discovery-url-mappings.xml   |  24 +--
 .../AmbariDynamicServiceURLCreatorTest.java     | 116 +++++++++-----
 gateway-release/src/assembly.xml                |   1 +
 gateway-server/pom.xml                          |   2 +-
 .../filter/PortMappingHelperHandler.java        |   2 +-
 .../topology/impl/DefaultTopologyService.java   |  40 +++--
 .../DefaultRemoteConfigurationMonitor.java      |  22 ++-
 .../org/apache/knox/gateway/util/KnoxCLI.java   | 153 ++++++++++++-------
 .../ZooKeeperConfigurationMonitorTest.java      |  17 ++-
 .../apache/knox/gateway/util/KnoxCLITest.java   |  16 ++
 gateway-service-remoteconfig/pom.xml            |   5 -
 gateway-test-release/pom.xml                    |  72 ++++++++-
 pom.xml                                         |  32 +++-
 24 files changed, 690 insertions(+), 150 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariClientCommon.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariClientCommon.java
index 9e5dcb3,0000000..1314305
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariClientCommon.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariClientCommon.java
@@@ -1,102 -1,0 +1,108 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import net.minidev.json.JSONArray;
 +import net.minidev.json.JSONObject;
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.gateway.topology.discovery.ServiceDiscoveryConfig;
 +
 +import java.util.HashMap;
 +import java.util.Map;
 +
 +class AmbariClientCommon {
 +
 +    static final String AMBARI_CLUSTERS_URI = "/api/v1/clusters";
 +
 +    static final String AMBARI_HOSTROLES_URI =
 +                                    AMBARI_CLUSTERS_URI + "/%s/services?fields=components/host_components/HostRoles";
 +
 +    static final String AMBARI_SERVICECONFIGS_URI =
 +                                    AMBARI_CLUSTERS_URI + "/%s/configurations/service_config_versions?is_current=true";
 +
 +    private static final AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
 +
 +    private RESTInvoker restClient;
 +
 +
 +    AmbariClientCommon(AliasService aliasService) {
 +        this(new RESTInvoker(aliasService));
 +    }
 +
 +
 +    AmbariClientCommon(RESTInvoker restInvoker) {
 +        this.restClient = restInvoker;
 +    }
 +
 +
 +
 +    Map<String, Map<String, AmbariCluster.ServiceConfiguration>> getActiveServiceConfigurations(String clusterName,
 +                                                                                                ServiceDiscoveryConfig config) {
-         return getActiveServiceConfigurations(config.getAddress(),
-                                               clusterName,
-                                               config.getUser(),
-                                               config.getPasswordAlias());
++        Map<String, Map<String, AmbariCluster.ServiceConfiguration>> activeConfigs = null;
++
++        if (config != null) {
++            activeConfigs = getActiveServiceConfigurations(config.getAddress(),
++                                                           clusterName,
++                                                           config.getUser(),
++                                                           config.getPasswordAlias());
++        }
++
++        return activeConfigs;
 +    }
 +
 +
 +    Map<String, Map<String, AmbariCluster.ServiceConfiguration>> getActiveServiceConfigurations(String discoveryAddress,
 +                                                                                                String clusterName,
 +                                                                                                String discoveryUser,
 +                                                                                                String discoveryPwdAlias) {
 +        Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigurations = new HashMap<>();
 +
 +        String serviceConfigsURL = String.format("%s" + AMBARI_SERVICECONFIGS_URI, discoveryAddress, clusterName);
 +
 +        JSONObject serviceConfigsJSON = restClient.invoke(serviceConfigsURL, discoveryUser, discoveryPwdAlias);
 +        if (serviceConfigsJSON != null) {
 +            // Process the service configurations
 +            JSONArray serviceConfigs = (JSONArray) serviceConfigsJSON.get("items");
 +            for (Object serviceConfig : serviceConfigs) {
 +                String serviceName = (String) ((JSONObject) serviceConfig).get("service_name");
 +                JSONArray configurations = (JSONArray) ((JSONObject) serviceConfig).get("configurations");
 +                for (Object configuration : configurations) {
 +                    String configType = (String) ((JSONObject) configuration).get("type");
 +                    String configVersion = String.valueOf(((JSONObject) configuration).get("version"));
 +
 +                    Map<String, String> configProps = new HashMap<>();
 +                    JSONObject configProperties = (JSONObject) ((JSONObject) configuration).get("properties");
 +                    for (String propertyName : configProperties.keySet()) {
 +                        configProps.put(propertyName, String.valueOf(configProperties.get(propertyName)));
 +                    }
 +                    if (!serviceConfigurations.containsKey(serviceName)) {
 +                        serviceConfigurations.put(serviceName, new HashMap<>());
 +                    }
 +                    serviceConfigurations.get(serviceName).put(configType,
 +                                                               new AmbariCluster.ServiceConfiguration(configType,
 +                                                                                                      configVersion,
 +                                                                                                      configProps));
 +                }
 +            }
 +        }
 +
 +        return serviceConfigurations;
 +    }
 +
 +
 +}
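
[Editor's note] getActiveServiceConfigurations() returns a nested map keyed first by
service name and then by configuration type. A hypothetical caller illustrating that
shape (the class and method names are invented, and package-level access to
AmbariCluster.ServiceConfiguration is assumed):

    import java.util.Map;

    final class ServiceConfigDumpSketch {
        // serviceName -> (configType -> ServiceConfiguration), as built above
        static void dump(Map<String, Map<String, AmbariCluster.ServiceConfiguration>> configs) {
            for (Map.Entry<String, Map<String, AmbariCluster.ServiceConfiguration>> svc : configs.entrySet()) {
                for (Map.Entry<String, AmbariCluster.ServiceConfiguration> cfg : svc.getValue().entrySet()) {
                    System.out.printf("%s / %s (version %s): %d properties%n",
                                      svc.getKey(), cfg.getKey(),
                                      cfg.getValue().getVersion(),
                                      cfg.getValue().getProperties().size());
                }
            }
        }
    }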

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariCluster.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariCluster.java
index bcf3adc,0000000..9d3fa74
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariCluster.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariCluster.java
@@@ -1,120 -1,0 +1,120 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import org.apache.knox.gateway.topology.discovery.ServiceDiscovery;
 +
 +import java.util.ArrayList;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +
 +class AmbariCluster implements ServiceDiscovery.Cluster {
 +
 +    private String name = null;
 +
-     private AmbariDynamicServiceURLCreator urlCreator;
++    private ServiceURLFactory urlFactory;
 +
 +    private Map<String, Map<String, ServiceConfiguration>> serviceConfigurations = new HashMap<>();
 +
 +    private Map<String, AmbariComponent> components = null;
 +
 +
 +    AmbariCluster(String name) {
 +        this.name = name;
 +        components = new HashMap<>();
-         urlCreator = new AmbariDynamicServiceURLCreator(this);
++        urlFactory = ServiceURLFactory.newInstance(this);
 +    }
 +
 +    void addServiceConfiguration(String serviceName, String configurationType, ServiceConfiguration serviceConfig) {
 +        if (!serviceConfigurations.containsKey(serviceName)) {
 +            serviceConfigurations.put(serviceName, new HashMap<>());
 +        }
 +        serviceConfigurations.get(serviceName).put(configurationType, serviceConfig);
 +    }
 +
 +
 +    void addComponent(AmbariComponent component) {
 +        components.put(component.getName(), component);
 +    }
 +
 +
 +    ServiceConfiguration getServiceConfiguration(String serviceName, String configurationType) {
 +        ServiceConfiguration sc = null;
 +        Map<String, ServiceConfiguration> configs = serviceConfigurations.get(serviceName);
 +        if (configs != null) {
 +            sc = configs.get(configurationType);
 +        }
 +        return sc;
 +    }
 +
 +
 +    Map<String, Map<String, ServiceConfiguration>> getServiceConfigurations() {
 +        return serviceConfigurations;
 +    }
 +
 +
 +    Map<String, AmbariComponent> getComponents() {
 +        return components;
 +    }
 +
 +
 +    AmbariComponent getComponent(String name) {
 +        return components.get(name);
 +    }
 +
 +
 +    @Override
 +    public String getName() {
 +        return name;
 +    }
 +
 +
 +    @Override
 +    public List<String> getServiceURLs(String serviceName) {
 +        List<String> urls = new ArrayList<>();
-         urls.addAll(urlCreator.create(serviceName));
++        urls.addAll(urlFactory.create(serviceName));
 +        return urls;
 +    }
 +
 +
 +    static class ServiceConfiguration {
 +
 +        private String type;
 +        private String version;
 +        private Map<String, String> props;
 +
 +        ServiceConfiguration(String type, String version, Map<String, String> properties) {
 +            this.type = type;
 +            this.version = version;
 +            this.props = properties;
 +        }
 +
 +        public String getVersion() {
 +            return version;
 +        }
 +
 +        public String getType() {
 +            return type;
 +        }
 +
 +        public Map<String, String> getProperties() {
 +            return props;
 +        }
 +    }
 +
 +}
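
[Editor's note] This commit replaces the hard-wired AmbariDynamicServiceURLCreator with
a ServiceURLFactory; the merge's file list above also adds ServiceURLCreator and
WebHdfsUrlCreator. The factory's internals are not shown in this digest, so the sketch
below only illustrates a plausible dispatch pattern (every name suffixed "Sketch" is
invented):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    interface UrlCreatorSketch {
        List<String> create(String serviceName);
    }

    final class UrlFactorySketch {
        private final Map<String, UrlCreatorSketch> specialized = new HashMap<>();
        private final UrlCreatorSketch dynamicFallback;

        UrlFactorySketch(UrlCreatorSketch webHdfsCreator, UrlCreatorSketch dynamicCreator) {
            specialized.put("WEBHDFS", webHdfsCreator);  // assumption: creators keyed by service role
            this.dynamicFallback = dynamicCreator;
        }

        List<String> create(String serviceName) {
            // Prefer a service-specific creator (e.g. for WEBHDFS); otherwise fall back
            // to the configuration-driven dynamic creator.
            return specialized.getOrDefault(serviceName, dynamicFallback).create(serviceName);
        }
    }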

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
index c3aa27a,0000000..920b05c7
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
@@@ -1,525 -1,0 +1,559 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import org.apache.commons.io.FileUtils;
 +import org.apache.knox.gateway.config.GatewayConfig;
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.apache.knox.gateway.services.security.AliasService;
 +import org.apache.knox.gateway.topology.discovery.ClusterConfigurationMonitor;
 +import org.apache.knox.gateway.topology.discovery.ServiceDiscoveryConfig;
 +
 +import java.io.File;
 +import java.io.FileInputStream;
 +import java.io.FileOutputStream;
 +import java.io.IOException;
 +import java.util.ArrayList;
 +import java.util.Collection;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Properties;
 +import java.util.concurrent.locks.ReadWriteLock;
 +import java.util.concurrent.locks.ReentrantReadWriteLock;
 +
 +
 +class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
 +
 +    private static final String TYPE = "Ambari";
 +
 +    private static final String CLUSTERS_DATA_DIR_NAME = "clusters";
 +
 +    private static final String PERSISTED_FILE_COMMENT = "Generated File. Do Not Edit!";
 +
 +    private static final String PROP_CLUSTER_PREFIX = "cluster.";
 +    private static final String PROP_CLUSTER_SOURCE = PROP_CLUSTER_PREFIX + "source";
 +    private static final String PROP_CLUSTER_NAME   = PROP_CLUSTER_PREFIX + "name";
 +    private static final String PROP_CLUSTER_USER   = PROP_CLUSTER_PREFIX + "user";
 +    private static final String PROP_CLUSTER_ALIAS  = PROP_CLUSTER_PREFIX + "pwd.alias";
 +
 +    static final String INTERVAL_PROPERTY_NAME = "org.apache.knox.gateway.topology.discovery.ambari.monitor.interval";
 +
 +
 +    private static final AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
 +
 +    // Ambari address
 +    //    clusterName -> ServiceDiscoveryConfig
 +    //
 +    Map<String, Map<String, ServiceDiscoveryConfig>> clusterMonitorConfigurations = new HashMap<>();
 +
 +    // Ambari address
 +    //    clusterName
 +    //        configType -> version
 +    //
 +    Map<String, Map<String, Map<String, String>>> ambariClusterConfigVersions = new HashMap<>();
 +
 +    ReadWriteLock configVersionsLock = new ReentrantReadWriteLock();
 +
 +    private List<ConfigurationChangeListener> changeListeners = new ArrayList<>();
 +
 +    private AmbariClientCommon ambariClient;
 +
 +    PollingConfigAnalyzer internalMonitor;
 +
 +    GatewayConfig gatewayConfig = null;
 +
 +    static String getType() {
 +        return TYPE;
 +    }
 +
 +    AmbariConfigurationMonitor(GatewayConfig config, AliasService aliasService) {
 +        this.gatewayConfig   = config;
 +        this.ambariClient    = new AmbariClientCommon(aliasService);
 +        this.internalMonitor = new PollingConfigAnalyzer(this);
 +
 +        // Override the default polling interval if it has been configured
 +        int interval = config.getClusterMonitorPollingInterval(getType());
 +        if (interval > 0) {
 +            setPollingInterval(interval);
 +        }
 +
 +        init();
 +    }
 +
 +    @Override
 +    public void setPollingInterval(int interval) {
 +        internalMonitor.setInterval(interval);
 +    }
 +
 +    private void init() {
 +        loadDiscoveryConfiguration();
 +        loadClusterVersionData();
 +    }
 +
 +    /**
 +     * Load any previously-persisted service discovery configurations.
 +     * This is necessary for checking previously-deployed topologies.
 +     */
 +    private void loadDiscoveryConfiguration() {
 +        File persistenceDir = getPersistenceDir();
 +        if (persistenceDir != null) {
 +            Collection<File> persistedConfigs = FileUtils.listFiles(persistenceDir, new String[]{"conf"}, false);
 +            for (File persisted : persistedConfigs) {
 +                Properties props = new Properties();
-                     props.load(new FileInputStream(persisted));
++                try (FileInputStream in = new FileInputStream(persisted)) {
++                    props.load(in);
 +
 +                    addDiscoveryConfig(props.getProperty(PROP_CLUSTER_NAME), new ServiceDiscoveryConfig() {
 +                                                            public String getAddress() {
 +                                                                return props.getProperty(PROP_CLUSTER_SOURCE);
 +                                                            }
 +
 +                                                            public String getUser() {
 +                                                                return props.getProperty(PROP_CLUSTER_USER);
 +                                                            }
 +
 +                                                            public String getPasswordAlias() {
 +                                                                return props.getProperty(PROP_CLUSTER_ALIAS);
 +                                                            }
 +                                                        });
 +                } catch (IOException e) {
 +                    log.failedToLoadClusterMonitorServiceDiscoveryConfig(getType(), e);
++                } finally {
++                    if (in != null) {
++                        try {
++                            in.close();
++                        } catch (IOException e) {
++                            // Ignore any error on close
++                        }
++                    }
 +                }
 +            }
 +        }
 +    }
 +
 +    /**
 +     * Load any previously-persisted cluster configuration version records, so the monitor will check
 +     * previously-deployed topologies against the current cluster configuration.
 +     */
 +    private void loadClusterVersionData() {
 +        File persistenceDir = getPersistenceDir();
 +        if (persistenceDir != null) {
-             Collection<File> persistedConfigs = FileUtils.listFiles(getPersistenceDir(), new String[]{"ver"}, false);
++            Collection<File> persistedConfigs = FileUtils.listFiles(persistenceDir, new String[]{"ver"}, false);
 +            for (File persisted : persistedConfigs) {
 +                Properties props = new Properties();
++                FileInputStream in = null;
 +                try {
-                     props.load(new FileInputStream(persisted));
++                    in = new FileInputStream(persisted);
++                    props.load(in);
 +
 +                    String source = props.getProperty(PROP_CLUSTER_SOURCE);
 +                    String clusterName = props.getProperty(PROP_CLUSTER_NAME);
 +
 +                    Map<String, String> configVersions = new HashMap<>();
 +                    for (String name : props.stringPropertyNames()) {
 +                        if (!name.startsWith(PROP_CLUSTER_PREFIX)) { // Ignore implementation-specific properties
 +                            configVersions.put(name, props.getProperty(name));
 +                        }
 +                    }
 +
 +                    // Map the config versions to the cluster name
 +                    addClusterConfigVersions(source, clusterName, configVersions);
 +
 +                } catch (IOException e) {
 +                    log.failedToLoadClusterMonitorConfigVersions(getType(), e);
++                } finally {
++                    if (in != null) {
++                        try {
++                            in.close();
++                        } catch (IOException e) {
++                            // Ignore any error on close
++                        }
++                    }
 +                }
 +            }
 +        }
 +    }
 +
 +    private void persistDiscoveryConfiguration(String clusterName, ServiceDiscoveryConfig sdc) {
 +        File persistenceDir = getPersistenceDir();
 +        if (persistenceDir != null) {
 +
 +            Properties props = new Properties();
 +            props.setProperty(PROP_CLUSTER_NAME, clusterName);
 +            props.setProperty(PROP_CLUSTER_SOURCE, sdc.getAddress());
 +
 +            String username = sdc.getUser();
 +            if (username != null) {
 +                props.setProperty(PROP_CLUSTER_USER, username);
 +            }
 +            String pwdAlias = sdc.getPasswordAlias();
 +            if (pwdAlias != null) {
 +                props.setProperty(PROP_CLUSTER_ALIAS, pwdAlias);
 +            }
 +
 +            persist(props, getDiscoveryConfigPersistenceFile(sdc.getAddress(), clusterName));
 +        }
 +    }
 +
 +    private void persistClusterVersionData(String address, String clusterName, Map<String, String> configVersions) {
 +        File persistenceDir = getPersistenceDir();
 +        if (persistenceDir != null) {
 +            Properties props = new Properties();
 +            props.setProperty(PROP_CLUSTER_NAME, clusterName);
 +            props.setProperty(PROP_CLUSTER_SOURCE, address);
 +            for (String name : configVersions.keySet()) {
 +                props.setProperty(name, configVersions.get(name));
 +            }
 +
 +            persist(props, getConfigVersionsPersistenceFile(address, clusterName));
 +        }
 +    }
 +
 +    private void persist(Properties props, File dest) {
++        FileOutputStream out = null;
 +        try {
-             props.store(new FileOutputStream(dest), PERSISTED_FILE_COMMENT);
++            out = new FileOutputStream(dest);
++            props.store(out, PERSISTED_FILE_COMMENT);
++            out.flush();
 +        } catch (Exception e) {
 +            log.failedToPersistClusterMonitorData(getType(), dest.getAbsolutePath(), e);
++        } finally {
++            if (out != null) {
++                try {
++                    out.close();
++                } catch (IOException e) {
++                    // Ignore any error on close
++                }
++            }
 +        }
 +    }
 +
 +    private File getPersistenceDir() {
 +        File persistenceDir = null;
 +
 +        File dataDir = new File(gatewayConfig.getGatewayDataDir());
 +        if (dataDir.exists()) {
 +            File clustersDir = new File(dataDir, CLUSTERS_DATA_DIR_NAME);
 +            if (!clustersDir.exists()) {
 +                clustersDir.mkdirs();
 +            }
 +            persistenceDir = clustersDir;
 +        }
 +
 +        return persistenceDir;
 +    }
 +
 +    private File getDiscoveryConfigPersistenceFile(String address, String clusterName) {
 +        return getPersistenceFile(address, clusterName, "conf");
 +    }
 +
 +    private File getConfigVersionsPersistenceFile(String address, String clusterName) {
 +        return getPersistenceFile(address, clusterName, "ver");
 +    }
 +
 +    private File getPersistenceFile(String address, String clusterName, String ext) {
 +        String fileName = address.replace(":", "_").replace("/", "_") + "-" + clusterName + "." + ext;
 +        return new File(getPersistenceDir(), fileName);
 +    }
 +
 +    /**
 +     * Add cluster configuration details to the monitor's in-memory record.
 +     *
 +     * @param address        An Ambari instance address.
 +     * @param clusterName    The name of a cluster associated with the Ambari instance.
 +     * @param configVersions A Map of configuration types and their corresponding versions.
 +     */
 +    private void addClusterConfigVersions(String address, String clusterName, Map<String, String> configVersions) {
 +        configVersionsLock.writeLock().lock();
 +        try {
 +            ambariClusterConfigVersions.computeIfAbsent(address, k -> new HashMap<>())
 +                                       .put(clusterName, configVersions);
 +        } finally {
 +            configVersionsLock.writeLock().unlock();
 +        }
 +    }
 +
 +    public void start() {
 +        (new Thread(internalMonitor, "AmbariConfigurationMonitor")).start();
 +    }
 +
 +    public void stop() {
 +        internalMonitor.stop();
 +    }
 +
 +    @Override
 +    public void addListener(ConfigurationChangeListener listener) {
 +        changeListeners.add(listener);
 +    }
 +
 +    /**
 +     * Add discovery configuration details for the specified cluster, so the monitor knows how to connect to check for
 +     * changes.
 +     *
 +     * @param clusterName The name of the cluster.
 +     * @param config      The associated service discovery configuration.
 +     */
 +    void addDiscoveryConfig(String clusterName, ServiceDiscoveryConfig config) {
 +        clusterMonitorConfigurations.computeIfAbsent(config.getAddress(), k -> new HashMap<>()).put(clusterName, config);
 +    }
 +
 +
 +    /**
 +     * Get the service discovery configuration associated with the specified Ambari instance and cluster.
 +     *
 +     * @param address     An Ambari instance address.
 +     * @param clusterName The name of a cluster associated with the Ambari instance.
 +     *
 +     * @return The associated ServiceDiscoveryConfig object.
 +     */
 +    ServiceDiscoveryConfig getDiscoveryConfig(String address, String clusterName) {
 +        ServiceDiscoveryConfig config = null;
 +        if (clusterMonitorConfigurations.containsKey(address)) {
 +            config = clusterMonitorConfigurations.get(address).get(clusterName);
 +        }
 +        return config;
 +    }
 +
 +
 +    /**
 +     * Add cluster configuration data to the monitor, which it will use when determining if configuration has changed.
 +     *
 +     * @param cluster         An AmbariCluster object.
 +     * @param discoveryConfig The discovery configuration associated with the cluster.
 +     */
 +    void addClusterConfigVersions(AmbariCluster cluster, ServiceDiscoveryConfig discoveryConfig) {
 +
 +        String clusterName = cluster.getName();
 +
 +        // Register the cluster discovery configuration for the monitor connections
 +        persistDiscoveryConfiguration(clusterName, discoveryConfig);
 +        addDiscoveryConfig(clusterName, discoveryConfig);
 +
 +        // Build the set of configuration versions
 +        Map<String, String> configVersions = new HashMap<>();
 +        Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigs = cluster.getServiceConfigurations();
 +        for (String serviceName : serviceConfigs.keySet()) {
 +            Map<String, AmbariCluster.ServiceConfiguration> configTypeVersionMap = serviceConfigs.get(serviceName);
 +            for (AmbariCluster.ServiceConfiguration config : configTypeVersionMap.values()) {
 +                String configType = config.getType();
 +                String version = config.getVersion();
 +                configVersions.put(configType, version);
 +            }
 +        }
 +
 +        persistClusterVersionData(discoveryConfig.getAddress(), clusterName, configVersions);
 +        addClusterConfigVersions(discoveryConfig.getAddress(), clusterName, configVersions);
 +    }
 +
 +
 +    /**
 +     * Remove the configuration record for the specified Ambari instance and cluster name.
 +     *
 +     * @param address     An Ambari instance address.
 +     * @param clusterName The name of a cluster associated with the Ambari instance.
 +     *
 +     * @return The removed data: a Map of configuration types and their corresponding versions.
 +     */
 +    Map<String, String> removeClusterConfigVersions(String address, String clusterName) {
 +        Map<String, String> result = new HashMap<>();
 +
 +        configVersionsLock.writeLock().lock();
 +        try {
 +            if (ambariClusterConfigVersions.containsKey(address)) {
 +                Map<String, String> removed = ambariClusterConfigVersions.get(address).remove(clusterName);
 +                if (removed != null) { // There may be no record for this cluster
 +                    result.putAll(removed);
 +                }
 +            }
 +        } finally {
 +            configVersionsLock.writeLock().unlock();
 +        }
 +
 +        // Delete the associated persisted record
 +        File persisted = getConfigVersionsPersistenceFile(address, clusterName);
 +        if (persisted.exists()) {
 +            persisted.delete();
 +        }
 +
 +        return result;
 +    }
 +
 +    /**
 +     * Get the cluster configuration details for the specified cluster and Ambari instance.
 +     *
 +     * @param address     An Ambari instance address.
 +     * @param clusterName The name of a cluster associated with the Ambari instance.
 +     *
 +     * @return A Map of configuration types and their corresponding versions.
 +     */
 +    Map<String, String> getClusterConfigVersions(String address, String clusterName) {
 +        Map<String, String> result = new HashMap<>();
 +
 +        configVersionsLock.readLock().lock();
 +        try {
 +            if (ambariClusterConfigVersions.containsKey(address)) {
 +                Map<String, String> versions = ambariClusterConfigVersions.get(address).get(clusterName);
 +                if (versions != null) { // There may be no record for this cluster
 +                    result.putAll(versions);
 +                }
 +            }
 +        } finally {
 +            configVersionsLock.readLock().unlock();
 +        }
 +
 +        return result;
 +    }
 +
 +
 +    /**
 +     * Get all the clusters the monitor knows about.
 +     *
 +     * @return A Map of Ambari instance addresses to associated cluster names.
 +     */
 +    Map<String, List<String>> getClusterNames() {
 +        Map<String, List<String>> result = new HashMap<>();
 +
 +        configVersionsLock.readLock().lock();
 +        try {
 +            for (String address : ambariClusterConfigVersions.keySet()) {
 +                List<String> clusterNames = new ArrayList<>();
 +                clusterNames.addAll(ambariClusterConfigVersions.get(address).keySet());
 +                result.put(address, clusterNames);
 +            }
 +        } finally {
 +            configVersionsLock.readLock().unlock();
 +        }
 +
 +        return result;
 +    }
 +
 +
 +    /**
 +     * Notify registered change listeners.
 +     *
 +     * @param source      The address of the Ambari instance from which the cluster details were determined.
 +     * @param clusterName The name of the cluster whose configuration details have changed.
 +     */
 +    void notifyChangeListeners(String source, String clusterName) {
 +        for (ConfigurationChangeListener listener : changeListeners) {
 +            listener.onConfigurationChange(source, clusterName);
 +        }
 +    }
 +
 +
 +    /**
 +     * Request the current active configuration version info from Ambari.
 +     *
 +     * @param address     The Ambari instance address.
 +     * @param clusterName The name of the cluster for which the details are desired.
 +     *
 +     * @return A Map of service configuration types and their corresponding versions.
 +     */
 +    Map<String, String> getUpdatedConfigVersions(String address, String clusterName) {
 +        Map<String, String> configVersions = new HashMap<>();
 +
-         Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigs =
-                     ambariClient.getActiveServiceConfigurations(clusterName, getDiscoveryConfig(address, clusterName));
++        ServiceDiscoveryConfig sdc = getDiscoveryConfig(address, clusterName);
++        if (sdc != null) {
++            Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigs =
++                                                       ambariClient.getActiveServiceConfigurations(clusterName, sdc);
 +
-         for (Map<String, AmbariCluster.ServiceConfiguration> serviceConfig : serviceConfigs.values()) {
-             for (AmbariCluster.ServiceConfiguration config : serviceConfig.values()) {
-                 configVersions.put(config.getType(), config.getVersion());
++            for (Map<String, AmbariCluster.ServiceConfiguration> serviceConfig : serviceConfigs.values()) {
++                for (AmbariCluster.ServiceConfiguration config : serviceConfig.values()) {
++                    configVersions.put(config.getType(), config.getVersion());
++                }
 +            }
 +        }
 +
 +        return configVersions;
 +    }
 +
 +
 +    /**
 +     * The thread that polls Ambari for configuration details for clusters associated with discovered topologies,
 +     * compares them with the current recorded values, and notifies any listeners when differences are discovered.
 +     */
 +    static final class PollingConfigAnalyzer implements Runnable {
 +
 +        private static final int DEFAULT_POLLING_INTERVAL = 60;
 +
 +        // Polling interval in seconds
 +        private int interval = DEFAULT_POLLING_INTERVAL;
 +
 +        private AmbariConfigurationMonitor delegate;
 +
 +        private volatile boolean isActive = false; // Written from another thread via stop()
 +
 +        PollingConfigAnalyzer(AmbariConfigurationMonitor delegate) {
 +            this.delegate = delegate;
 +            this.interval = Integer.getInteger(INTERVAL_PROPERTY_NAME, PollingConfigAnalyzer.DEFAULT_POLLING_INTERVAL);
 +        }
 +
 +        void setInterval(int interval) {
 +            this.interval = interval;
 +        }
 +
 +
 +        void stop() {
 +            isActive = false;
 +        }
 +
 +        @Override
 +        public void run() {
 +            isActive = true;
 +
 +            log.startedAmbariConfigMonitor(interval);
 +
 +            while (isActive) {
 +                for (Map.Entry<String, List<String>> entry : delegate.getClusterNames().entrySet()) {
 +                    String address = entry.getKey();
 +                    for (String clusterName : entry.getValue()) {
 +                        Map<String, String> configVersions = delegate.getClusterConfigVersions(address, clusterName);
 +                        if (configVersions != null && !configVersions.isEmpty()) {
 +                            Map<String, String> updatedVersions = delegate.getUpdatedConfigVersions(address, clusterName);
 +                            if (updatedVersions != null && !updatedVersions.isEmpty()) {
 +                                boolean configHasChanged = false;
 +
 +                                // If the config sets don't match in size, then something has changed
 +                                if (updatedVersions.size() != configVersions.size()) {
 +                                    configHasChanged = true;
 +                                } else {
 +                                    // Perform the comparison of all the config versions
 +                                    for (Map.Entry<String, String> configVersion : configVersions.entrySet()) {
 +                                        if (!configVersion.getValue().equals(updatedVersions.get(configVersion.getKey()))) {
 +                                            configHasChanged = true;
 +                                            break;
 +                                        }
 +                                    }
 +                                }
 +
 +                                // If a change has occurred, notify the listeners
 +                                if (configHasChanged) {
 +                                    delegate.notifyChangeListeners(address, clusterName);
 +                                }
 +                            }
 +                        }
 +                    }
 +                }
 +
 +                try {
 +                    Thread.sleep(interval * 1000);
 +                } catch (InterruptedException e) {
 +                    // Ignore
 +                }
 +            }
 +        }
 +    }
 +
 +}
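
For context, a minimal sketch of how this monitor is wired up. The GatewayConfig and AliasService instances are assumed to be supplied by the gateway's existing services, and the listener body is hypothetical; the listener signature follows the call in notifyChangeListeners() above.

    // Sketch only: 'config' and 'aliasService' are assumed to come from the gateway.
    AmbariConfigurationMonitor monitor = new AmbariConfigurationMonitor(config, aliasService);
    monitor.addListener(new ConfigurationChangeListener() {
        @Override
        public void onConfigurationChange(String source, String clusterName) {
            // e.g., trigger regeneration of the topology tied to this cluster
        }
    });
    monitor.start(); // polls at the configured interval (default: 60 seconds)
    // ... later, on shutdown:
    monitor.stop();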

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
index 3c2269d,0000000..dc4ac49
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
@@@ -1,151 -1,0 +1,151 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +
 +import java.io.ByteArrayInputStream;
 +import java.io.File;
 +import java.io.FileInputStream;
 +import java.io.IOException;
 +import java.util.ArrayList;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +
 +
- class AmbariDynamicServiceURLCreator {
++class AmbariDynamicServiceURLCreator implements ServiceURLCreator {
 +
 +    static final String MAPPING_CONFIG_OVERRIDE_PROPERTY = "org.apache.gateway.topology.discovery.ambari.config";
 +
 +    private AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
 +
 +    private AmbariCluster cluster = null;
 +    private ServiceURLPropertyConfig config;
 +
 +    AmbariDynamicServiceURLCreator(AmbariCluster cluster) {
 +        this.cluster = cluster;
 +
 +        String mappingConfiguration = System.getProperty(MAPPING_CONFIG_OVERRIDE_PROPERTY);
 +        if (mappingConfiguration != null) {
 +            File mappingConfigFile = new File(mappingConfiguration);
 +            if (mappingConfigFile.exists()) {
 +                try {
 +                    config = new ServiceURLPropertyConfig(mappingConfigFile);
 +                    log.loadedComponentConfigMappings(mappingConfigFile.getAbsolutePath());
 +                } catch (Exception e) {
 +                    log.failedToLoadComponentConfigMappings(mappingConfigFile.getAbsolutePath(), e);
 +                }
 +            }
 +        }
 +
 +        // If there is no valid override configured, fall-back to the internal mapping configuration
 +        if (config == null) {
 +            config = new ServiceURLPropertyConfig();
 +        }
 +    }
 +
 +    AmbariDynamicServiceURLCreator(AmbariCluster cluster, File mappingConfiguration) throws IOException {
 +        this.cluster = cluster;
 +        config = new ServiceURLPropertyConfig(new FileInputStream(mappingConfiguration));
 +    }
 +
 +    AmbariDynamicServiceURLCreator(AmbariCluster cluster, String mappings) {
 +        this.cluster = cluster;
 +        config = new ServiceURLPropertyConfig(new ByteArrayInputStream(mappings.getBytes()));
 +    }
 +
-     List<String> create(String serviceName) {
++    @Override
++    public List<String> create(String serviceName) {
 +        List<String> urls = new ArrayList<>();
 +
 +        Map<String, String> placeholderValues = new HashMap<>();
 +        List<String> componentHostnames = new ArrayList<>();
 +        String hostNamePlaceholder = null;
 +
 +        ServiceURLPropertyConfig.URLPattern pattern = config.getURLPattern(serviceName);
 +        if (pattern != null) {
 +            for (String propertyName : pattern.getPlaceholders()) {
 +                ServiceURLPropertyConfig.Property configProperty = config.getConfigProperty(serviceName, propertyName);
 +
 +                String propertyValue = null;
 +                String propertyType = configProperty.getType();
 +                if (ServiceURLPropertyConfig.Property.TYPE_SERVICE.equals(propertyType)) {
 +                    log.lookingUpServiceConfigProperty(configProperty.getService(), configProperty.getServiceConfig(), configProperty.getValue());
 +                    AmbariCluster.ServiceConfiguration svcConfig =
 +                        cluster.getServiceConfiguration(configProperty.getService(), configProperty.getServiceConfig());
 +                    if (svcConfig != null) {
 +                        propertyValue = svcConfig.getProperties().get(configProperty.getValue());
 +                    }
 +                } else if (ServiceURLPropertyConfig.Property.TYPE_COMPONENT.equals(propertyType)) {
 +                    String compName = configProperty.getComponent();
 +                    if (compName != null) {
 +                        AmbariComponent component = cluster.getComponent(compName);
 +                        if (component != null) {
 +                            if (ServiceURLPropertyConfig.Property.PROP_COMP_HOSTNAME.equals(configProperty.getValue())) {
 +                                log.lookingUpComponentHosts(compName);
 +                                componentHostnames.addAll(component.getHostNames());
 +                                hostNamePlaceholder = propertyName; // Remember the host name placeholder
 +                            } else {
 +                                log.lookingUpComponentConfigProperty(compName, configProperty.getValue());
 +                                propertyValue = component.getConfigProperty(configProperty.getValue());
 +                            }
 +                        }
 +                    }
 +                } else { // Derived property
 +                    log.handlingDerivedProperty(serviceName, configProperty.getType(), configProperty.getName());
 +                    ServiceURLPropertyConfig.Property p = config.getConfigProperty(serviceName, configProperty.getName());
 +                    propertyValue = p.getValue();
 +                    if (propertyValue == null) {
 +                        if (p.getConditionHandler() != null) {
 +                            propertyValue = p.getConditionHandler().evaluate(config, cluster);
 +                        }
 +                    }
 +                }
 +
 +                log.determinedPropertyValue(configProperty.getName(), propertyValue);
 +                placeholderValues.put(configProperty.getName(), propertyValue);
 +            }
 +
 +            // For patterns with a placeholder value for the hostname (e.g., multiple URL scenarios)
 +            if (!componentHostnames.isEmpty()) {
 +                for (String componentHostname : componentHostnames) {
 +                    String url = pattern.get().replace("{" + hostNamePlaceholder + "}", componentHostname);
 +                    urls.add(createURL(url, placeholderValues));
 +                }
 +            } else { // Single URL result case
 +                urls.add(createURL(pattern.get(), placeholderValues));
 +            }
 +        }
 +
 +        return urls;
 +    }
 +
 +    private String createURL(String pattern, Map<String, String> placeholderValues) {
 +        String url = null;
 +        if (pattern != null) {
 +            url = pattern;
 +            for (String placeHolder : placeholderValues.keySet()) {
 +                String value = placeholderValues.get(placeHolder);
 +                if (value != null) {
 +                    url = url.replace("{" + placeHolder + "}", value);
 +                }
 +            }
 +        }
 +        return url;
 +    }
 +
 +}
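
A rough usage sketch for the creator; the service name is illustrative, and 'cluster' would be an AmbariCluster populated by service discovery:

    // Sketch only; uses the internal mapping configuration.
    ServiceURLCreator creator = new AmbariDynamicServiceURLCreator(cluster);
    List<String> urls = creator.create("RESOURCEMANAGER");
    // Multiple URLs are returned when the url-pattern's hostname placeholder
    // resolves to a multi-host component; otherwise a single substituted URL.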

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
index 4044d56,0000000..0dfab36
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
@@@ -1,76 -1,0 +1,88 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +
 +class PropertyEqualsHandler implements ConditionalValueHandler {
 +
 +    private String serviceName                        = null;
 +    private String propertyName                       = null;
 +    private String propertyValue                      = null;
 +    private ConditionalValueHandler affirmativeResult = null;
 +    private ConditionalValueHandler negativeResult    = null;
 +
 +    PropertyEqualsHandler(String                  serviceName,
 +                          String                  propertyName,
 +                          String                  propertyValue,
 +                          ConditionalValueHandler affirmativeResult,
 +                          ConditionalValueHandler negativeResult) {
 +        this.serviceName       = serviceName;
 +        this.propertyName      = propertyName;
 +        this.propertyValue     = propertyValue;
 +        this.affirmativeResult = affirmativeResult;
 +        this.negativeResult    = negativeResult;
 +    }
 +
 +    @Override
 +    public String evaluate(ServiceURLPropertyConfig config, AmbariCluster cluster) {
 +        String result = null;
 +
 +        ServiceURLPropertyConfig.Property p = config.getConfigProperty(serviceName, propertyName);
 +        if (p != null) {
 +            String value = getActualPropertyValue(cluster, p);
-             if (propertyValue.equals(value)) {
-                 result = affirmativeResult.evaluate(config, cluster);
-             } else if (negativeResult != null) {
-                 result = negativeResult.evaluate(config, cluster);
++            if (propertyValue == null) {
++                // If the property value isn't specified, then we're just checking if the property is set with any value
++                if (value != null) {
++                    // So, if there is a value in the config, respond with the affirmative
++                    result = affirmativeResult.evaluate(config, cluster);
++                } else if (negativeResult != null) {
++                    result = negativeResult.evaluate(config, cluster);
++                }
++            }
++
++            if (propertyValue != null && result == null) { // Guard: avoid an NPE when no value attribute was specified
++                if (propertyValue.equals(value)) {
++                    result = affirmativeResult.evaluate(config, cluster);
++                } else if (negativeResult != null) {
++                    result = negativeResult.evaluate(config, cluster);
++                }
 +            }
 +
 +            // Check if the result is a reference to a local derived property
 +            ServiceURLPropertyConfig.Property derived = config.getConfigProperty(serviceName, result);
 +            if (derived != null) {
 +                result = getActualPropertyValue(cluster, derived);
 +            }
 +        }
 +
 +        return result;
 +    }
 +
 +    private String getActualPropertyValue(AmbariCluster cluster, ServiceURLPropertyConfig.Property property) {
 +        String value = null;
 +        String propertyType = property.getType();
 +        if (ServiceURLPropertyConfig.Property.TYPE_COMPONENT.equals(propertyType)) {
 +            AmbariComponent component = cluster.getComponent(property.getComponent());
 +            if (component != null) {
 +                value = component.getConfigProperty(property.getValue());
 +            }
 +        } else if (ServiceURLPropertyConfig.Property.TYPE_SERVICE.equals(propertyType)) {
 +            value = cluster.getServiceConfiguration(property.getService(), property.getServiceConfig()).getProperties().get(property.getValue());
 +        }
 +        return value;
 +    }
 +}
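
This handler corresponds to an <if> element in the URL mapping configuration. A hypothetical pair of conditions showing the two forms it now supports (explicit value comparison, and the presence-only check when the value attribute is omitted); the property names are illustrative:

    <!-- Compare the referenced property against an explicit value -->
    <if property="HTTP_POLICY" value="HTTPS_ONLY">
        <then>HTTPS_PORT</then>
        <else>HTTP_PORT</else>
    </if>

    <!-- No value attribute: take the affirmative branch whenever the property is set -->
    <if property="HA_ADDRESSES">
        <then>HA_PORT</then>
        <else>PORT</else>
    </if>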

http://git-wip-us.apache.org/repos/asf/knox/blob/e5fd0622/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
----------------------------------------------------------------------
diff --cc gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
index 47b20e9,0000000..9f3da3d
mode 100644,000000..100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/knox/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
@@@ -1,324 -1,0 +1,329 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements. See the NOTICE file distributed with this
 + * work for additional information regarding copyright ownership. The ASF
 + * licenses this file to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance with the License.
 + * You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 + * License for the specific language governing permissions and limitations under
 + * the License.
 + */
 +package org.apache.knox.gateway.topology.discovery.ambari;
 +
 +import org.apache.knox.gateway.i18n.messages.MessagesFactory;
 +import org.apache.knox.gateway.util.XmlUtils;
 +import org.w3c.dom.Document;
 +import org.w3c.dom.NamedNodeMap;
 +import org.w3c.dom.Node;
 +import org.w3c.dom.NodeList;
 +
 +import javax.xml.xpath.XPath;
 +import javax.xml.xpath.XPathConstants;
 +import javax.xml.xpath.XPathExpression;
 +import javax.xml.xpath.XPathExpressionException;
 +import javax.xml.xpath.XPathFactory;
 +import java.io.File;
 +import java.io.FileInputStream;
 +import java.io.IOException;
 +import java.io.InputStream;
 +import java.util.ArrayList;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.regex.Matcher;
 +import java.util.regex.Pattern;
 +
 +/**
 + * Service URL pattern mapping configuration model.
 + */
 +class ServiceURLPropertyConfig {
 +
 +    private static final AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
 +
 +    private static final String ATTR_NAME = "name";
 +
 +    private static XPathExpression SERVICE_URL_PATTERN_MAPPINGS;
 +    private static XPathExpression URL_PATTERN;
 +    private static XPathExpression PROPERTIES;
 +    static {
 +        XPath xpath = XPathFactory.newInstance().newXPath();
 +        try {
 +            SERVICE_URL_PATTERN_MAPPINGS = xpath.compile("/service-discovery-url-mappings/service");
 +            URL_PATTERN                  = xpath.compile("url-pattern/text()");
 +            PROPERTIES                   = xpath.compile("properties/property");
 +        } catch (XPathExpressionException e) {
 +            e.printStackTrace();
 +        }
 +    }
 +
 +    private static final String DEFAULT_SERVICE_URL_MAPPINGS = "ambari-service-discovery-url-mappings.xml";
 +
 +    private Map<String, URLPattern> urlPatterns = new HashMap<>();
 +
 +    private Map<String, Map<String, Property>> properties = new HashMap<>();
 +
 +
 +    /**
 +     * The default service URL pattern to property mapping configuration will be used.
 +     */
 +    ServiceURLPropertyConfig() {
 +        this(ServiceURLPropertyConfig.class.getClassLoader().getResourceAsStream(DEFAULT_SERVICE_URL_MAPPINGS));
 +    }
 +
 +    /**
 +     * The service URL pattern to property mapping configuration will be loaded from the specified file.
 +     */
 +    ServiceURLPropertyConfig(File mappingConfigurationFile) throws Exception {
 +        this(new FileInputStream(mappingConfigurationFile));
 +    }
 +
 +    /**
 +     * The service URL pattern to property mapping configuration will be parsed from the specified stream.
 +     *
 +     * @param source An InputStream for the XML content
 +     */
 +    ServiceURLPropertyConfig(InputStream source) {
 +        // Parse the XML, and build the model
 +        try {
 +            Document doc = XmlUtils.readXml(source);
 +
 +            NodeList serviceNodes =
 +                    (NodeList) SERVICE_URL_PATTERN_MAPPINGS.evaluate(doc, XPathConstants.NODESET);
 +            for (int i=0; i < serviceNodes.getLength(); i++) {
 +                Node serviceNode = serviceNodes.item(i);
 +                String serviceName = serviceNode.getAttributes().getNamedItem(ATTR_NAME).getNodeValue();
 +                properties.put(serviceName, new HashMap<String, Property>());
 +
 +                Node urlPatternNode = (Node) URL_PATTERN.evaluate(serviceNode, XPathConstants.NODE);
 +                if (urlPatternNode != null) {
 +                    urlPatterns.put(serviceName, new URLPattern(urlPatternNode.getNodeValue()));
 +                }
 +
 +                NodeList propertiesNode = (NodeList) PROPERTIES.evaluate(serviceNode, XPathConstants.NODESET);
 +                if (propertiesNode != null) {
 +                    processProperties(serviceName, propertiesNode);
 +                }
 +            }
 +        } catch (Exception e) {
 +            log.failedToLoadServiceDiscoveryURLDefConfiguration(e);
 +        } finally {
 +            try {
 +                source.close();
 +            } catch (IOException e) {
 +                // Ignore
 +            }
 +        }
 +    }
 +
 +    private void processProperties(String serviceName, NodeList propertyNodes) {
 +        for (int i = 0; i < propertyNodes.getLength(); i++) {
 +            Property p = Property.createProperty(serviceName, propertyNodes.item(i));
 +            properties.get(serviceName).put(p.getName(), p);
 +        }
 +    }
 +
 +    URLPattern getURLPattern(String service) {
 +        return urlPatterns.get(service);
 +    }
 +
 +    Property getConfigProperty(String service, String property) {
 +        return properties.get(service).get(property);
 +    }
 +
 +    static class URLPattern {
 +        String pattern;
 +        List<String> placeholders = new ArrayList<>();
 +
 +        URLPattern(String pattern) {
 +            this.pattern = pattern;
 +
 +            final Pattern regex = Pattern.compile("\\{(.*?)}", Pattern.DOTALL);
 +            final Matcher matcher = regex.matcher(pattern);
 +            while( matcher.find() ){
 +                placeholders.add(matcher.group(1));
 +            }
 +        }
 +
 +        String get() { return pattern; }
 +        List<String> getPlaceholders() {
 +            return placeholders;
 +        }
 +    }
 +
 +    static class Property {
 +        static final String TYPE_SERVICE   = "SERVICE";
 +        static final String TYPE_COMPONENT = "COMPONENT";
 +        static final String TYPE_DERIVED   = "DERIVED";
 +
 +        static final String PROP_COMP_HOSTNAME = "component.host.name";
 +
 +        static final String ATTR_NAME     = "name";
 +        static final String ATTR_PROPERTY = "property";
 +        static final String ATTR_VALUE    = "value";
 +
 +        static XPathExpression HOSTNAME;
 +        static XPathExpression SERVICE_CONFIG;
 +        static XPathExpression COMPONENT;
 +        static XPathExpression CONFIG_PROPERTY;
 +        static XPathExpression IF;
 +        static XPathExpression THEN;
 +        static XPathExpression ELSE;
 +        static XPathExpression TEXT;
 +        static {
 +            XPath xpath = XPathFactory.newInstance().newXPath();
 +            try {
 +                HOSTNAME        = xpath.compile("hostname");
 +                SERVICE_CONFIG  = xpath.compile("service-config");
 +                COMPONENT       = xpath.compile("component");
 +                CONFIG_PROPERTY = xpath.compile("config-property");
 +                IF              = xpath.compile("if");
 +                THEN            = xpath.compile("then");
 +                ELSE            = xpath.compile("else");
 +                TEXT            = xpath.compile("text()");
 +            } catch (XPathExpressionException e) {
 +                e.printStackTrace();
 +            }
 +        }
 +
 +
 +        String type;
 +        String name;
 +        String component;
 +        String service;
 +        String serviceConfig;
 +        String value;
 +        ConditionalValueHandler conditionHandler = null;
 +
 +        private Property(String type,
 +                         String propertyName,
 +                         String component,
 +                         String service,
 +                         String configType,
 +                         String value,
 +                         ConditionalValueHandler pch) {
 +            this.type = type;
 +            this.name = propertyName;
 +            this.service = service;
 +            this.component = component;
 +            this.serviceConfig = configType;
 +            this.value = value;
 +            conditionHandler = pch;
 +        }
 +
 +        static Property createProperty(String serviceName, Node propertyNode) {
 +            String propertyName = propertyNode.getAttributes().getNamedItem(ATTR_NAME).getNodeValue();
 +            String propertyType = null;
 +            String serviceType = null;
 +            String configType = null;
 +            String componentType = null;
 +            String value = null;
 +            ConditionalValueHandler pch = null;
 +
 +            try {
 +                Node hostNameNode = (Node) HOSTNAME.evaluate(propertyNode, XPathConstants.NODE);
 +                if (hostNameNode != null) {
 +                    value = PROP_COMP_HOSTNAME;
 +                }
 +
 +                // Check for a service-config node
 +                Node scNode = (Node) SERVICE_CONFIG.evaluate(propertyNode, XPathConstants.NODE);
 +                if (scNode != null) {
 +                    // Service config property
 +                    propertyType = Property.TYPE_SERVICE;
 +                    serviceType = scNode.getAttributes().getNamedItem(ATTR_NAME).getNodeValue();
 +                    Node scTextNode = (Node) TEXT.evaluate(scNode, XPathConstants.NODE);
 +                    configType = scTextNode.getNodeValue();
 +                } else { // If not service-config node, check for a component config node
 +                    Node cNode = (Node) COMPONENT.evaluate(propertyNode, XPathConstants.NODE);
 +                    if (cNode != null) {
 +                        // Component config property
 +                        propertyType = Property.TYPE_COMPONENT;
 +                        Node cTextNode = (Node) TEXT.evaluate(cNode, XPathConstants.NODE);
 +                        configType = cTextNode.getNodeValue();
 +                        componentType = cTextNode.getNodeValue();
 +                    }
 +                }
 +
 +                // Check for a config property node
 +                Node cpNode = (Node) CONFIG_PROPERTY.evaluate(propertyNode, XPathConstants.NODE);
 +                if (cpNode != null) {
 +                    // Check for a condition element
 +                    Node ifNode = (Node) IF.evaluate(cpNode, XPathConstants.NODE);
 +                    if (ifNode != null) {
 +                        propertyType = TYPE_DERIVED;
 +                        pch = getConditionHandler(serviceName, ifNode);
 +                    } else {
 +                        Node cpTextNode = (Node) TEXT.evaluate(cpNode, XPathConstants.NODE);
 +                        value = cpTextNode.getNodeValue();
 +                    }
 +                }
 +            } catch (Exception e) {
 +                e.printStackTrace();
 +            }
 +
 +            // Create and return the property representation
 +            return new Property(propertyType, propertyName, componentType, serviceType, configType, value, pch);
 +        }
 +
 +        private static ConditionalValueHandler getConditionHandler(String serviceName, Node ifNode) throws Exception {
 +            ConditionalValueHandler result = null;
 +
 +            if (ifNode != null) {
 +                NamedNodeMap attrs = ifNode.getAttributes();
 +                String comparisonPropName = attrs.getNamedItem(ATTR_PROPERTY).getNodeValue();
-                 String comparisonValue = attrs.getNamedItem(ATTR_VALUE).getNodeValue();
++
++                String comparisonValue = null;
++                Node valueNode = attrs.getNamedItem(ATTR_VALUE);
++                if (valueNode != null) {
++                    comparisonValue = attrs.getNamedItem(ATTR_VALUE).getNodeValue();
++                }
 +
 +                ConditionalValueHandler affirmativeResult = null;
 +                Node thenNode = (Node) THEN.evaluate(ifNode, XPathConstants.NODE);
 +                if (thenNode != null) {
 +                    Node subIfNode = (Node) IF.evaluate(thenNode, XPathConstants.NODE);
 +                    if (subIfNode != null) {
 +                        affirmativeResult = getConditionHandler(serviceName, subIfNode);
 +                    } else {
 +                        affirmativeResult = new SimpleValueHandler(thenNode.getFirstChild().getNodeValue());
 +                    }
 +                }
 +
 +                ConditionalValueHandler negativeResult = null;
 +                Node elseNode = (Node) ELSE.evaluate(ifNode, XPathConstants.NODE);
 +                if (elseNode != null) {
 +                    Node subIfNode = (Node) IF.evaluate(elseNode, XPathConstants.NODE);
 +                    if (subIfNode != null) {
 +                        negativeResult = getConditionHandler(serviceName, subIfNode);
 +                    } else {
 +                        negativeResult = new SimpleValueHandler(elseNode.getFirstChild().getNodeValue());
 +                    }
 +                }
 +
 +                result = new PropertyEqualsHandler(serviceName,
 +                        comparisonPropName,
 +                        comparisonValue,
 +                        affirmativeResult,
 +                        negativeResult);
 +            }
 +
 +            return result;
 +        }
 +
 +        String getType() { return type; }
 +        String getName() { return name; }
 +        String getComponent() { return component; }
 +        String getService() { return service; }
 +        String getServiceConfig() { return serviceConfig; }
 +        String getValue() {
 +            return value;
 +        }
 +        ConditionalValueHandler getConditionHandler() { return conditionHandler; }
 +    }
 +}
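
For reference, a hypothetical mapping entry in the shape this parser expects; the real definitions live in ambari-service-discovery-url-mappings.xml, and the service, component, and property names below are illustrative:

    <service-discovery-url-mappings>
        <service name="OOZIE">
            <url-pattern>http://{HOST}:{PORT}/oozie</url-pattern>
            <properties>
                <property name="HOST">
                    <component>OOZIE_SERVER</component>
                    <hostname/>
                </property>
                <property name="PORT">
                    <component>OOZIE_SERVER</component>
                    <config-property>oozie.http.port</config-property>
                </property>
            </properties>
        </service>
    </service-discovery-url-mappings>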


[06/16] knox git commit: KNOX-1043

Posted by mo...@apache.org.
KNOX-1043


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/348c4d7d
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/348c4d7d
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/348c4d7d

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 348c4d7def61f6296a7cf192ed00e38a8e5c3714
Parents: a438bcc
Author: Phil Zampino <pz...@gmail.com>
Authored: Thu Dec 21 11:49:51 2017 -0500
Committer: Phil Zampino <pz...@apache.org>
Committed: Wed Jan 3 15:39:36 2018 -0500

----------------------------------------------------------------------
 .../discovery/ambari/AmbariCluster.java         |   6 +-
 .../ambari/AmbariDynamicServiceURLCreator.java  |   4 +-
 .../discovery/ambari/PropertyEqualsHandler.java |  20 +++-
 .../discovery/ambari/ServiceURLCreator.java     |  32 +++++
 .../discovery/ambari/ServiceURLFactory.java     |  75 ++++++++++++
 .../ambari/ServiceURLPropertyConfig.java        |   7 +-
 .../discovery/ambari/WebHdfsUrlCreator.java     |  84 ++++++++++++++
 .../ambari-service-discovery-url-mappings.xml   |  24 ++--
 .../AmbariDynamicServiceURLCreatorTest.java     | 116 +++++++++++++------
 9 files changed, 311 insertions(+), 57 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariCluster.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariCluster.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariCluster.java
index 1d308cc..2dff181 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariCluster.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariCluster.java
@@ -27,7 +27,7 @@ class AmbariCluster implements ServiceDiscovery.Cluster {
 
     private String name = null;
 
-    private AmbariDynamicServiceURLCreator urlCreator;
+    private ServiceURLFactory urlFactory;
 
     private Map<String, Map<String, ServiceConfiguration>> serviceConfigurations = new HashMap<>();
 
@@ -37,7 +37,7 @@ class AmbariCluster implements ServiceDiscovery.Cluster {
     AmbariCluster(String name) {
         this.name = name;
         components = new HashMap<>();
-        urlCreator = new AmbariDynamicServiceURLCreator(this);
+        urlFactory = ServiceURLFactory.newInstance(this);
     }
 
     void addServiceConfiguration(String serviceName, String configurationType, ServiceConfiguration serviceConfig) {
@@ -87,7 +87,7 @@ class AmbariCluster implements ServiceDiscovery.Cluster {
     @Override
     public List<String> getServiceURLs(String serviceName) {
         List<String> urls = new ArrayList<>();
-        urls.addAll(urlCreator.create(serviceName));
+        urls.addAll(urlFactory.create(serviceName));
         return urls;
     }
 

http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
index ed5d3e7..c35ed66 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreator.java
@@ -28,7 +28,7 @@ import java.util.List;
 import java.util.Map;
 
 
-class AmbariDynamicServiceURLCreator {
+class AmbariDynamicServiceURLCreator implements ServiceURLCreator {
 
     static final String MAPPING_CONFIG_OVERRIDE_PROPERTY = "org.apache.gateway.topology.discovery.ambari.config";
 
@@ -69,7 +69,7 @@ class AmbariDynamicServiceURLCreator {
         config = new ServiceURLPropertyConfig(new ByteArrayInputStream(mappings.getBytes()));
     }
 
-    List<String> create(String serviceName) {
+    public List<String> create(String serviceName) {
         List<String> urls = new ArrayList<>();
 
         Map<String, String> placeholderValues = new HashMap<>();

http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/PropertyEqualsHandler.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
index 642a676..e5f1c68 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/PropertyEqualsHandler.java
@@ -44,10 +44,22 @@ class PropertyEqualsHandler implements ConditionalValueHandler {
         ServiceURLPropertyConfig.Property p = config.getConfigProperty(serviceName, propertyName);
         if (p != null) {
             String value = getActualPropertyValue(cluster, p);
-            if (propertyValue.equals(value)) {
-                result = affirmativeResult.evaluate(config, cluster);
-            } else if (negativeResult != null) {
-                result = negativeResult.evaluate(config, cluster);
+            if (propertyValue == null) {
+                // If the property value isn't specified, then we're just checking if the property is set with any value
+                if (value != null) {
+                    // So, if there is a value in the config, respond with the affirmative
+                    result = affirmativeResult.evaluate(config, cluster);
+                } else if (negativeResult != null) {
+                    result = negativeResult.evaluate(config, cluster);
+                }
+            }
+
+            if (propertyValue != null && result == null) { // Guard: avoid an NPE when no value attribute was specified
+                if (propertyValue.equals(value)) {
+                    result = affirmativeResult.evaluate(config, cluster);
+                } else if (negativeResult != null) {
+                    result = negativeResult.evaluate(config, cluster);
+                }
             }
 
             // Check if the result is a reference to a local derived property

http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
new file mode 100644
index 0000000..8295155
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLCreator.java
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.gateway.topology.discovery.ambari;
+
+import java.util.List;
+
+public interface ServiceURLCreator {
+
+  /**
+   * Creates one or more cluster-specific URLs for the specified service.
+   *
+   * @param service The service identifier.
+   *
+   * @return A List of created URL strings; the list may be empty.
+   */
+  List<String> create(String service);
+
+}
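
For illustration, a trivial hypothetical implementation of this contract (the custom creator actually added by this commit is WebHdfsUrlCreator):

    package org.apache.hadoop.gateway.topology.discovery.ambari;

    import java.util.Collections;
    import java.util.List;

    // Hypothetical example only; not part of this commit.
    public class FixedUrlCreator implements ServiceURLCreator {
        @Override
        public List<String> create(String service) {
            // Always return a single, statically-defined URL for the service.
            return Collections.singletonList("http://example.com:8080/" + service.toLowerCase());
        }
    }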

http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
new file mode 100644
index 0000000..fa9f89a
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLFactory.java
@@ -0,0 +1,75 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.gateway.topology.discovery.ambari;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Factory for creating cluster-specific service URLs.
+ */
+public class ServiceURLFactory {
+
+  private Map<String, ServiceURLCreator> urlCreators = new HashMap<>();
+
+  private ServiceURLCreator defaultURLCreator = null;
+
+
+  private ServiceURLFactory(AmbariCluster cluster) {
+    // Default URL creator
+    defaultURLCreator = new AmbariDynamicServiceURLCreator(cluster);
+
+    // Custom (internal) URL creators
+    urlCreators.put("WEBHDFS", new WebHdfsUrlCreator(cluster));
+  }
+
+
+  /**
+   * Create a new factory for the specified cluster.
+   *
+   * @param cluster The cluster.
+   *
+   * @return A ServiceURLFactory instance.
+   */
+  public static ServiceURLFactory newInstance(AmbariCluster cluster) {
+    return new ServiceURLFactory(cluster);
+  }
+
+
+  /**
+   * Create one or more cluster-specific URLs for the specified service.
+   *
+   * @param service The service.
+   *
+   * @return A List of service URL strings; the list may be empty.
+   */
+  public List<String> create(String service) {
+    List<String> urls = new ArrayList<>();
+
+    ServiceURLCreator creator = urlCreators.get(service);
+    if (creator == null) {
+      creator = defaultURLCreator;
+    }
+
+    urls.addAll(creator.create(service));
+
+    return urls;
+  }
+
+}

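A sketch of the intended call pattern, assuming the caller already holds an AmbariCluster obtained from service discovery and lives in the same package as the factory:

    import java.util.List;

    class ServiceURLFactoryUsage {
      // WEBHDFS resolves to the custom WebHdfsUrlCreator registered in the
      // factory constructor; any other service name falls back to the
      // default AmbariDynamicServiceURLCreator.
      static List<String> urlsFor(AmbariCluster cluster, String service) {
        return ServiceURLFactory.newInstance(cluster).create(service);
      }
    }
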
http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
index deb5bb3..d4be904 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/ServiceURLPropertyConfig.java
@@ -277,7 +277,12 @@ class ServiceURLPropertyConfig {
             if (ifNode != null) {
                 NamedNodeMap attrs = ifNode.getAttributes();
                 String comparisonPropName = attrs.getNamedItem(ATTR_PROPERTY).getNodeValue();
-                String comparisonValue = attrs.getNamedItem(ATTR_VALUE).getNodeValue();
+
+                String comparisonValue = null;
+                Node valueNode = attrs.getNamedItem(ATTR_VALUE);
+                if (valueNode != null) {
+                    comparisonValue = attrs.getNamedItem(ATTR_VALUE).getNodeValue();
+                }
 
                 ConditionalValueHandler affirmativeResult = null;
                 Node thenNode = (Node) THEN.evaluate(ifNode, XPathConstants.NODE);

http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
new file mode 100644
index 0000000..1d11c66
--- /dev/null
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/WebHdfsUrlCreator.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.gateway.topology.discovery.ambari;
+
+import org.apache.hadoop.gateway.i18n.messages.MessagesFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * A ServiceURLCreator implementation for WEBHDFS.
+ */
+public class WebHdfsUrlCreator implements ServiceURLCreator {
+
+  private static final String SERVICE = "WEBHDFS";
+
+  private AmbariServiceDiscoveryMessages log = MessagesFactory.get(AmbariServiceDiscoveryMessages.class);
+
+  private AmbariCluster cluster = null;
+
+  WebHdfsUrlCreator(AmbariCluster cluster) {
+    this.cluster = cluster;
+  }
+
+  @Override
+  public List<String> create(String service) {
+    List<String> urls = new ArrayList<>();
+
+    if (SERVICE.equals(service)) {
+      AmbariCluster.ServiceConfiguration sc = cluster.getServiceConfiguration("HDFS", "hdfs-site");
+
+      // First, check if it's HA config
+      String nameServices = null;
+      AmbariComponent nameNodeComp = cluster.getComponent("NAMENODE");
+      if (nameNodeComp != null) {
+        nameServices = nameNodeComp.getConfigProperty("dfs.nameservices");
+      }
+
+      if (nameServices != null && !nameServices.isEmpty()) {
+        // If it is an HA configuration
+        Map<String, String> props = sc.getProperties();
+
+        // Name node HTTP addresses are defined as properties of the form:
+        //      dfs.namenode.http-address.<NAMESERVICES>.nn<INDEX>
+        // Iterate over the nn<INDEX> properties until no such property is defined, since the number of
+        // NameNodes cannot be determined by any other means.
+        int i = 1;
+        String propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
+        while (propertyValue != null) {
+          urls.add(createURL(propertyValue));
+          propertyValue = getHANameNodeHttpAddress(props, nameServices, i++);
+        }
+      } else { // If it's not an HA configuration, get the single name node HTTP address
+        urls.add(createURL(sc.getProperties().get("dfs.namenode.http-address")));
+      }
+    }
+
+    return urls;
+  }
+
+  private static String getHANameNodeHttpAddress(Map<String, String> props, String nameServices, int index) {
+    return props.get("dfs.namenode.http-address." + nameServices + ".nn" + index);
+  }
+
+  private static String createURL(String address) {
+    return "http://" + address + "/webhdfs";
+  }
+
+}

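To make the iteration above concrete: for the HA test added later in this commit, with nameservice myNameServicesCluster and two NameNodes, the creator probes these properties in order and stops at the first one that is undefined:

    dfs.namenode.http-address.myNameServicesCluster.nn1   -> host1:50070
    dfs.namenode.http-address.myNameServicesCluster.nn2   -> host2:50077
    dfs.namenode.http-address.myNameServicesCluster.nn3   -> (undefined; iteration ends)

yielding http://host1:50070/webhdfs and http://host2:50077/webhdfs.
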
http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/main/resources/ambari-service-discovery-url-mappings.xml
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/resources/ambari-service-discovery-url-mappings.xml b/gateway-discovery-ambari/src/main/resources/ambari-service-discovery-url-mappings.xml
index 8953b8d..23fbf9c 100644
--- a/gateway-discovery-ambari/src/main/resources/ambari-service-discovery-url-mappings.xml
+++ b/gateway-discovery-ambari/src/main/resources/ambari-service-discovery-url-mappings.xml
@@ -24,12 +24,24 @@
 <service-discovery-url-mappings>
 
     <service name="NAMENODE">
-        <url-pattern>hdfs://{DFS_NAMENODE_RPC_ADDRESS}</url-pattern>
+        <url-pattern>hdfs://{DFS_NAMENODE_ADDRESS}</url-pattern>
         <properties>
             <property name="DFS_NAMENODE_RPC_ADDRESS">
                 <component>NAMENODE</component>
                 <config-property>dfs.namenode.rpc-address</config-property>
             </property>
+            <property name="DFS_NAMESERVICES">
+                <component>NAMENODE</component>
+                <config-property>dfs.nameservices</config-property>
+            </property>
+            <property name="DFS_NAMENODE_ADDRESS">
+                <config-property>
+                    <if property="DFS_NAMESERVICES">
+                        <then>DFS_NAMESERVICES</then>
+                        <else>DFS_NAMENODE_RPC_ADDRESS</else>
+                    </if>
+                </config-property>
+            </property>
         </properties>
     </service>
 
@@ -43,16 +55,6 @@
         </properties>
     </service>
 
-    <service name="WEBHDFS">
-        <url-pattern>http://{WEBHDFS_ADDRESS}/webhdfs</url-pattern>
-        <properties>
-            <property name="WEBHDFS_ADDRESS">
-                <service-config name="HDFS">hdfs-site</service-config>
-                <config-property>dfs.namenode.http-address</config-property>
-            </property>
-        </properties>
-    </service>
-
     <service name="WEBHCAT">
         <url-pattern>http://{HOST}:{PORT}/templeton</url-pattern>
         <properties>

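The new <if> element gives DFS_NAMENODE_ADDRESS a conditional value: when the referenced DFS_NAMESERVICES property is defined (i.e., HA is configured), the nameservice name is used; otherwise it falls back to the single NameNode's RPC address. For example, with dfs.nameservices=myNSCluster (the value used by the new HA test below), resolution proceeds as:

    DFS_NAMESERVICES     -> myNSCluster        (property is defined)
    DFS_NAMENODE_ADDRESS -> DFS_NAMESERVICES   -> myNSCluster
    resulting URL        -> hdfs://myNSCluster

With dfs.nameservices unset, DFS_NAMENODE_ADDRESS resolves to DFS_NAMENODE_RPC_ADDRESS instead, preserving the previous hdfs://<rpc-address> behavior.
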
http://git-wip-us.apache.org/repos/asf/knox/blob/348c4d7d/gateway-discovery-ambari/src/test/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/test/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java b/gateway-discovery-ambari/src/test/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
index dd35dbb..63043d3 100644
--- a/gateway-discovery-ambari/src/test/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
+++ b/gateway-discovery-ambari/src/test/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariDynamicServiceURLCreatorTest.java
@@ -34,6 +34,7 @@ import java.util.Map;
 import static junit.framework.TestCase.assertTrue;
 import static junit.framework.TestCase.fail;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
 
 
@@ -112,6 +113,7 @@ public class AmbariDynamicServiceURLCreatorTest {
         validateServiceURLs(urls, HOSTNAMES, expectedScheme, HTTP_PORT, HTTP_PATH);
     }
 
+
     @Test
     public void testResourceManagerURLFromInternalMapping() throws Exception {
         testResourceManagerURL(null);
@@ -213,6 +215,35 @@ public class AmbariDynamicServiceURLCreatorTest {
         assertEquals("hdfs://" + ADDRESS, url);
     }
 
+
+    @Test
+    public void testNameNodeHAURLFromInternalMapping() throws Exception {
+        testNameNodeURLHA(null);
+    }
+
+    @Test
+    public void testNameNodeHAURLFromExternalMapping() throws Exception {
+        testNameNodeURLHA(TEST_MAPPING_CONFIG);
+    }
+
+    private void testNameNodeURLHA(Object mappingConfiguration) throws Exception {
+        final String NAMESERVICE = "myNSCluster";
+
+        AmbariComponent namenode = EasyMock.createNiceMock(AmbariComponent.class);
+        EasyMock.expect(namenode.getConfigProperty("dfs.nameservices")).andReturn(NAMESERVICE).anyTimes();
+        EasyMock.replay(namenode);
+
+        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
+        EasyMock.expect(cluster.getComponent("NAMENODE")).andReturn(namenode).anyTimes();
+        EasyMock.replay(cluster);
+
+        // Run the test
+        AmbariDynamicServiceURLCreator builder = newURLCreator(cluster, mappingConfiguration);
+        String url = builder.create("NAMENODE").get(0);
+        assertEquals("hdfs://" + NAMESERVICE, url);
+    }
+
+
     @Test
     public void testWebHCatURLFromInternalMapping() throws Exception {
         testWebHCatURL(null);
@@ -309,29 +340,6 @@ public class AmbariDynamicServiceURLCreatorTest {
         testWebHdfsURL(TEST_MAPPING_CONFIG);
     }
 
-    @Test
-    public void testWebHdfsURLFromSystemPropertyOverride() throws Exception {
-        // Write the test mapping configuration to a temp file
-        File mappingFile = File.createTempFile("mapping-config", "xml");
-        FileUtils.write(mappingFile, OVERRIDE_MAPPING_FILE_CONTENTS, "utf-8");
-
-        // Set the system property to point to the temp file
-        System.setProperty(AmbariDynamicServiceURLCreator.MAPPING_CONFIG_OVERRIDE_PROPERTY,
-                           mappingFile.getAbsolutePath());
-        try {
-            final String ADDRESS = "host3:1357";
-            // The URL creator should apply the file contents, and create the URL accordingly
-            String url = getTestWebHdfsURL(ADDRESS, null);
-
-            // Verify the URL matches the pattern from the file
-            assertEquals("http://" + ADDRESS + "/webhdfs/OVERRIDE", url);
-        } finally {
-            // Reset the system property, and delete the temp file
-            System.clearProperty(AmbariDynamicServiceURLCreator.MAPPING_CONFIG_OVERRIDE_PROPERTY);
-            mappingFile.delete();
-        }
-    }
-
     private void testWebHdfsURL(Object mappingConfiguration) throws Exception {
         final String ADDRESS = "host3:1357";
         assertEquals("http://" + ADDRESS + "/webhdfs", getTestWebHdfsURL(ADDRESS, mappingConfiguration));
@@ -350,8 +358,42 @@ public class AmbariDynamicServiceURLCreatorTest {
         EasyMock.replay(cluster);
 
         // Create the URL
-        AmbariDynamicServiceURLCreator creator = newURLCreator(cluster, mappingConfiguration);
-        return creator.create("WEBHDFS").get(0);
+        List<String> urls = ServiceURLFactory.newInstance(cluster).create("WEBHDFS");
+        assertNotNull(urls);
+        assertFalse(urls.isEmpty());
+        return urls.get(0);
+    }
+
+    @Test
+    public void testWebHdfsURLHA() throws Exception {
+        final String NAMESERVICES   = "myNameServicesCluster";
+        final String HTTP_ADDRESS_1 = "host1:50070";
+        final String HTTP_ADDRESS_2 = "host2:50077";
+
+        final String EXPECTED_ADDR_1 = "http://" + HTTP_ADDRESS_1 + "/webhdfs";
+        final String EXPECTED_ADDR_2 = "http://" + HTTP_ADDRESS_2 + "/webhdfs";
+
+        AmbariComponent namenode = EasyMock.createNiceMock(AmbariComponent.class);
+        EasyMock.expect(namenode.getConfigProperty("dfs.nameservices")).andReturn(NAMESERVICES).anyTimes();
+        EasyMock.replay(namenode);
+
+        AmbariCluster.ServiceConfiguration hdfsSC = EasyMock.createNiceMock(AmbariCluster.ServiceConfiguration.class);
+        Map<String, String> hdfsProps = new HashMap<>();
+        hdfsProps.put("dfs.namenode.http-address." + NAMESERVICES + ".nn1", HTTP_ADDRESS_1);
+        hdfsProps.put("dfs.namenode.http-address." + NAMESERVICES + ".nn2", HTTP_ADDRESS_2);
+        EasyMock.expect(hdfsSC.getProperties()).andReturn(hdfsProps).anyTimes();
+        EasyMock.replay(hdfsSC);
+
+        AmbariCluster cluster = EasyMock.createNiceMock(AmbariCluster.class);
+        EasyMock.expect(cluster.getComponent("NAMENODE")).andReturn(namenode).anyTimes();
+        EasyMock.expect(cluster.getServiceConfiguration("HDFS", "hdfs-site")).andReturn(hdfsSC).anyTimes();
+        EasyMock.replay(cluster);
+
+        // Create the URL
+        List<String> webhdfsURLs = ServiceURLFactory.newInstance(cluster).create("WEBHDFS");
+        assertEquals(2, webhdfsURLs.size());
+        assertTrue(webhdfsURLs.contains(EXPECTED_ADDR_1));
+        assertTrue(webhdfsURLs.contains(EXPECTED_ADDR_2));
     }
 
 
@@ -731,12 +773,24 @@ public class AmbariDynamicServiceURLCreatorTest {
             "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" +
             "<service-discovery-url-mappings>\n" +
             "  <service name=\"NAMENODE\">\n" +
-            "    <url-pattern>hdfs://{DFS_NAMENODE_RPC_ADDRESS}</url-pattern>\n" +
+            "    <url-pattern>hdfs://{DFS_NAMENODE_ADDRESS}</url-pattern>\n" +
             "    <properties>\n" +
             "      <property name=\"DFS_NAMENODE_RPC_ADDRESS\">\n" +
             "        <component>NAMENODE</component>\n" +
             "        <config-property>dfs.namenode.rpc-address</config-property>\n" +
             "      </property>\n" +
+            "      <property name=\"DFS_NAMESERVICES\">\n" +
+            "        <component>NAMENODE</component>\n" +
+            "        <config-property>dfs.nameservices</config-property>\n" +
+            "      </property>\n" +
+            "      <property name=\"DFS_NAMENODE_ADDRESS\">\n" +
+            "        <config-property>\n" +
+            "          <if property=\"DFS_NAMESERVICES\">\n" +
+            "            <then>DFS_NAMESERVICES</then>\n" +
+            "            <else>DFS_NAMENODE_RPC_ADDRESS</else>\n" +
+            "          </if>\n" +
+            "        </config-property>\n" +
+            "      </property>\n" +
             "    </properties>\n" +
             "  </service>\n" +
             "\n" +
@@ -750,16 +804,6 @@ public class AmbariDynamicServiceURLCreatorTest {
             "    </properties>\n" +
             "  </service>\n" +
             "\n" +
-            "  <service name=\"WEBHDFS\">\n" +
-            "    <url-pattern>http://{WEBHDFS_ADDRESS}/webhdfs</url-pattern>\n" +
-            "    <properties>\n" +
-            "      <property name=\"WEBHDFS_ADDRESS\">\n" +
-            "        <service-config name=\"HDFS\">hdfs-site</service-config>\n" +
-            "        <config-property>dfs.namenode.http-address</config-property>\n" +
-            "      </property>\n" +
-            "    </properties>\n" +
-            "  </service>\n" +
-            "\n" +
             "  <service name=\"WEBHCAT\">\n" +
             "    <url-pattern>http://{HOST}:{PORT}/templeton</url-pattern>\n" +
             "    <properties>\n" +


[03/16] knox git commit: KNOX-1141

Posted by mo...@apache.org.
KNOX-1141


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/7d42ffd0
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/7d42ffd0
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/7d42ffd0

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 7d42ffd065c66d810a52a90f8c2dace55f3aceed
Parents: 2b77fe1
Author: Phil Zampino <pz...@apache.org>
Authored: Thu Dec 7 09:36:55 2017 -0500
Committer: Phil Zampino <pz...@apache.org>
Committed: Wed Jan 3 11:46:14 2018 -0500

----------------------------------------------------------------------
 .../discovery/ambari/AmbariClientCommon.java    | 14 ++++--
 .../ambari/AmbariConfigurationMonitor.java      | 52 ++++++++++++++++----
 2 files changed, 53 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/7d42ffd0/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariClientCommon.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariClientCommon.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariClientCommon.java
index a2bf4ea..8e9dd26 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariClientCommon.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariClientCommon.java
@@ -53,10 +53,16 @@ class AmbariClientCommon {
 
     Map<String, Map<String, AmbariCluster.ServiceConfiguration>> getActiveServiceConfigurations(String clusterName,
                                                                                                 ServiceDiscoveryConfig config) {
-        return getActiveServiceConfigurations(config.getAddress(),
-                                              clusterName,
-                                              config.getUser(),
-                                              config.getPasswordAlias());
+        Map<String, Map<String, AmbariCluster.ServiceConfiguration>> activeConfigs = null;
+
+        if (config != null) {
+            activeConfigs = getActiveServiceConfigurations(config.getAddress(),
+                                                           clusterName,
+                                                           config.getUser(),
+                                                           config.getPasswordAlias());
+        }
+
+        return activeConfigs;
     }
 
 

http://git-wip-us.apache.org/repos/asf/knox/blob/7d42ffd0/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
----------------------------------------------------------------------
diff --git a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
index e4b5e43..8a6d95b 100644
--- a/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
+++ b/gateway-discovery-ambari/src/main/java/org/apache/hadoop/gateway/topology/discovery/ambari/AmbariConfigurationMonitor.java
@@ -115,8 +115,10 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
             Collection<File> persistedConfigs = FileUtils.listFiles(persistenceDir, new String[]{"conf"}, false);
             for (File persisted : persistedConfigs) {
                 Properties props = new Properties();
+                FileInputStream in = null;
                 try {
-                    props.load(new FileInputStream(persisted));
+                    in = new FileInputStream(persisted);
+                    props.load(in);
 
                     addDiscoveryConfig(props.getProperty(PROP_CLUSTER_NAME), new ServiceDiscoveryConfig() {
                                                             public String getAddress() {
@@ -133,6 +135,14 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
                                                         });
                 } catch (IOException e) {
                     log.failedToLoadClusterMonitorServiceDiscoveryConfig(getType(), e);
+                } finally {
+                    if (in != null) {
+                        try {
+                            in.close();
+                        } catch (IOException e) {
+                            // Ignore the failure to close the stream
+                        }
+                    }
                 }
             }
         }
@@ -145,11 +155,13 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
     private void loadClusterVersionData() {
         File persistenceDir = getPersistenceDir();
         if (persistenceDir != null) {
-            Collection<File> persistedConfigs = FileUtils.listFiles(getPersistenceDir(), new String[]{"ver"}, false);
+            Collection<File> persistedConfigs = FileUtils.listFiles(persistenceDir, new String[]{"ver"}, false);
             for (File persisted : persistedConfigs) {
                 Properties props = new Properties();
+                FileInputStream in = null;
                 try {
-                    props.load(new FileInputStream(persisted));
+                    in = new FileInputStream(persisted);
+                    props.load(in);
 
                     String source = props.getProperty(PROP_CLUSTER_SOURCE);
                     String clusterName = props.getProperty(PROP_CLUSTER_NAME);
@@ -166,6 +178,14 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
 
                 } catch (IOException e) {
                     log.failedToLoadClusterMonitorConfigVersions(getType(), e);
+                } finally {
+                    if (in != null) {
+                        try {
+                            in.close();
+                        } catch (IOException e) {
+                            // Ignore the failure to close the stream
+                        }
+                    }
                 }
             }
         }
@@ -207,10 +227,21 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
     }
 
     private void persist(Properties props, File dest) {
+        FileOutputStream out = null;
         try {
-            props.store(new FileOutputStream(dest), PERSISTED_FILE_COMMENT);
+            out = new FileOutputStream(dest);
+            props.store(out, PERSISTED_FILE_COMMENT);
+            out.flush();
         } catch (Exception e) {
             log.failedToPersistClusterMonitorData(getType(), dest.getAbsolutePath(), e);
+        } finally {
+            if (out != null) {
+                try {
+                    out.close();
+                } catch (IOException e) {
+                    // Ignore the failure to close the stream
+                }
+            }
         }
     }
 
@@ -433,12 +464,15 @@ class AmbariConfigurationMonitor implements ClusterConfigurationMonitor {
     Map<String, String> getUpdatedConfigVersions(String address, String clusterName) {
         Map<String, String> configVersions = new HashMap<>();
 
-        Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigs =
-                    ambariClient.getActiveServiceConfigurations(clusterName, getDiscoveryConfig(address, clusterName));
+        ServiceDiscoveryConfig sdc = getDiscoveryConfig(address, clusterName);
+        if (sdc != null) {
+            Map<String, Map<String, AmbariCluster.ServiceConfiguration>> serviceConfigs =
+                                                       ambariClient.getActiveServiceConfigurations(clusterName, sdc);
 
-        for (Map<String, AmbariCluster.ServiceConfiguration> serviceConfig : serviceConfigs.values()) {
-            for (AmbariCluster.ServiceConfiguration config : serviceConfig.values()) {
-                configVersions.put(config.getType(), config.getVersion());
+            for (Map<String, AmbariCluster.ServiceConfiguration> serviceConfig : serviceConfigs.values()) {
+                for (AmbariCluster.ServiceConfiguration config : serviceConfig.values()) {
+                    configVersions.put(config.getType(), config.getVersion());
+                }
             }
         }
 

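The close-in-finally pattern added above guarantees the streams are released without requiring Java 7. For comparison, a sketch of the same load logic using try-with-resources, assuming a Java 7+ target (persisted, log, and getType() are the surrounding method's variables; note this variant would also route close failures to the catch block rather than ignoring them):

    Properties props = new Properties();
    try (FileInputStream in = new FileInputStream(persisted)) {
        props.load(in);
        // ... process the loaded properties as above ...
    } catch (IOException e) {
        log.failedToLoadClusterMonitorConfigVersions(getType(), e);
    }
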

[08/16] knox git commit: KNOX-1116 - Builds of src distributions result in unexpected result from gateway version API.

Posted by mo...@apache.org.
KNOX-1116 - Builds of src distributions result in unexpected result from gateway version API.


Project: http://git-wip-us.apache.org/repos/asf/knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/knox/commit/6d4756f3
Tree: http://git-wip-us.apache.org/repos/asf/knox/tree/6d4756f3
Diff: http://git-wip-us.apache.org/repos/asf/knox/diff/6d4756f3

Branch: refs/heads/KNOX-998-Package_Restructuring
Commit: 6d4756f3d6b6c16949aec2580a5b1698ea69a3eb
Parents: e0adfbd
Author: Colm O hEigeartaigh <co...@apache.org>
Authored: Thu Jan 4 11:22:30 2018 +0000
Committer: Colm O hEigeartaigh <co...@apache.org>
Committed: Thu Jan 4 11:22:30 2018 +0000

----------------------------------------------------------------------
 pom.xml | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/knox/blob/6d4756f3/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index aae453e..fd7f62b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -281,6 +281,9 @@
                         </goals>
                     </execution>
                 </executions>
+                <configuration>
+                    <revisionOnScmFailure>${gateway-version}</revisionOnScmFailure>
+                </configuration>
             </plugin>
             <plugin>
                 <groupId>org.apache.maven.plugins</groupId>