Posted to commits@knox.apache.org by lm...@apache.org on 2019/07/23 21:27:16 UTC

svn commit: r1863668 [2/9] - in /knox: site/ site/books/knox-0-12-0/ site/books/knox-0-13-0/ site/books/knox-0-14-0/ site/books/knox-1-0-0/ site/books/knox-1-1-0/ site/books/knox-1-2-0/ site/books/knox-1-3-0/ trunk/ trunk/books/1.4.0/ trunk/books/1.4.0...

Added: knox/trunk/books/1.4.0/book_client-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_client-details.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_client-details.md (added)
+++ knox/trunk/books/1.4.0/book_client-details.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,692 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Client Details ##
+The KnoxShell release artifact provides a small-footprint client environment that removes all unnecessary server dependencies, configuration, binary scripts, etc. It is comprised of a few different components that serve different types of users.
+
+* A set of SDK-type classes that provide access to Hadoop resources over HTTP
+* A Groovy-based DSL for scripting access to Hadoop resources, built on the underlying SDK classes
+* KnoxShell token-based sessions that provide a CLI SSO session for executing multiple scripts
+
+The following sections provide an overview and quickstart for the KnoxShell.
+
+### Client Quickstart ###
+The following installation and setup instructions should get you started with using the KnoxShell very quickly.
+
+1. Download a knoxshell-x.x.x.zip or tar file and unzip it in your preferred location `{GATEWAY_CLIENT_HOME}`
+
+        home:knoxshell-0.12.0 larry$ ls -l
+        total 296
+        -rw-r--r--@  1 larry  staff  71714 Mar 14 14:06 LICENSE
+        -rw-r--r--@  1 larry  staff    164 Mar 14 14:06 NOTICE
+        -rw-r--r--@  1 larry  staff  71714 Mar 15 20:04 README
+        drwxr-xr-x@ 12 larry  staff    408 Mar 15 21:24 bin
+        drwxr--r--@  3 larry  staff    102 Mar 14 14:06 conf
+        drwxr-xr-x+  3 larry  staff    102 Mar 15 12:41 logs
+        drwxr-xr-x@ 18 larry  staff    612 Mar 14 14:18 samples
+        
+    |Directory    | Description |
+    |-------------|-------------|
+    |bin          |contains the main knoxshell jar and related shell scripts|
+    |conf         |only contains log4j config|
+    |logs         |contains the knoxshell.log file|
+    |samples      |has numerous examples to help you get started|
+
+2. cd `{GATEWAY_CLIENT_HOME}`
+3. Get or set up the truststore for the target Knox instance or fronting load balancer
+    - if you have access to the server you may use the command `knoxcli.sh export-cert --type JKS`
+    - copy the resulting `gateway-client-identity.jks` to your user home directory
+4. Execute an example script from the `{GATEWAY_CLIENT_HOME}/samples` directory - for instance:
+    - `bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy`
+    
+            home:knoxshell-0.12.0 larry$ bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy
+            Enter username: guest
+            Enter password:
+            [app-logs, apps, mapred, mr-history, tmp, user]
+
+At this point, you should have seen output similar to the above, probably with different directories listed. Take a look at the sample that we ran above:
+
+    import groovy.json.JsonSlurper
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+
+    import org.apache.knox.gateway.shell.Credentials
+
+    gateway = "https://localhost:8443/gateway/sandbox"
+
+    credentials = new Credentials()
+    credentials.add("ClearInput", "Enter username: ", "user")
+                    .add("HiddenInput", "Enter pas" + "sword: ", "pass")
+    credentials.collect()
+
+    username = credentials.get("user").string()
+    pass = credentials.get("pass").string()
+
+    session = Hadoop.login( gateway, username, pass )
+
+    text = Hdfs.ls( session ).dir( "/" ).now().string
+    json = (new JsonSlurper()).parseText( text )
+    println json.FileStatuses.FileStatus.pathSuffix
+    session.shutdown()
+
+Some things to note about this sample:
+
+1. The gateway URL is hardcoded
+    - Alternatives would be passing it as an argument to the script, using an environment variable or prompting for it with a ClearInput credential collector (see the sketch following this list)
+2. Credential collectors are used to gather credentials or other input from various sources. In this sample the HiddenInput and ClearInput collectors prompt the user with the provided prompt text, and the values are acquired by a subsequent get call with the provided name.
+3. The Hadoop.login method establishes a login session of sorts, which must be provided to the various API classes as an argument.
+4. The response text is easily retrieved as a string and can be parsed with JsonSlurper or any other JSON library
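+
+For instance, here is a minimal sketch of avoiding the hardcoded URL by falling back from a script argument to an environment variable and then to a default (the variable name `KNOXSHELL_TOPOLOGY_URL` is borrowed from the token sample later in this section; adjust as needed):
+
+    // determine the gateway URL: script argument, then environment variable, then default
+    envUrl = System.getenv("KNOXSHELL_TOPOLOGY_URL")
+    gateway = "https://localhost:8443/gateway/sandbox"
+    if (envUrl != null && !envUrl.isEmpty()) {
+      gateway = envUrl
+    }
+    if (args.length > 0) {
+      gateway = args[0]
+    }
+    println "Using gateway: " + gateway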
+
+### Client Token Sessions ###
+Building on the Quickstart above we will drill into some of the token session details here and walk through another sample.
+
+Unlike the quickstart, token sessions require the server to be configured in specific ways to allow the use of token sessions/federation.
+
+#### Server Setup ####
+1. KnoxToken service should be added to your `sandbox.xml` topology - see the [KnoxToken Configuration Section](#KnoxToken+Configuration)
+
+        <service>
+           <role>KNOXTOKEN</role>
+           <param>
+              <name>knox.token.ttl</name>
+              <value>36000000</value>
+           </param>
+           <param>
+              <name>knox.token.audiences</name>
+              <value>tokenbased</value>
+           </param>
+           <param>
+              <name>knox.token.target.url</name>
+              <value>https://localhost:8443/gateway/tokenbased</value>
+           </param>
+        </service>
+
+2. A `tokenbased.xml` topology should be added to accept tokens as federation tokens for access to exposed resources, using the [JWT Provider](#JWT+Provider)
+
+        <provider>
+           <role>federation</role>
+           <name>JWTProvider</name>
+           <enabled>true</enabled>
+           <param>
+               <name>knox.token.audiences</name>
+               <value>tokenbased</value>
+           </param>
+        </provider>
+
+3. Use the KnoxShell token commands to establish and manage your session
+    - `bin/knoxshell.sh init https://localhost:8443/gateway/sandbox` to acquire a token and cache it in the user home directory
+    - `bin/knoxshell.sh list` to display the details of the cached token, the expiration time and optionally the target URL
+    - `bin/knoxshell.sh destroy` to remove the cached session token and terminate the session
+
+4. Execute a script that can take advantage of the token credential collector and target url
+
+        import groovy.json.JsonSlurper
+        import java.util.HashMap
+        import java.util.Map
+        import org.apache.knox.gateway.shell.Credentials
+        import org.apache.knox.gateway.shell.Hadoop
+        import org.apache.knox.gateway.shell.hdfs.Hdfs
+
+        credentials = new Credentials()
+        credentials.add("KnoxToken", "none: ", "token")
+        credentials.collect()
+
+        token = credentials.get("token").string()
+
+        gateway = System.getenv("KNOXSHELL_TOPOLOGY_URL")
+        if (gateway == null || gateway.equals("")) {
+          gateway = credentials.get("token").getTargetUrl()
+        }
+
+        println ""
+        println "*****************************GATEWAY INSTANCE**********************************"
+        println gateway
+        println "*******************************************************************************"
+        println ""
+
+        headers = new HashMap()
+        headers.put("Authorization", "Bearer " + token)
+
+        session = Hadoop.login( gateway, headers )
+
+        if (args.length > 0) {
+          dir = args[0]
+        } else {
+          dir = "/"
+        }
+
+        text = Hdfs.ls( session ).dir( dir ).now().string
+        json = (new JsonSlurper()).parseText( text )
+        statuses = json.get("FileStatuses");
+
+        println statuses
+
+        session.shutdown()
+
+Note the following about the above sample script:
+
+1. Use of the KnoxToken credential collector
+2. Use of the targetUrl from the credential collector
+3. Optional override of the target url with environment variable
+4. The passing of the headers map to the session creation in Hadoop.login
+5. The passing of an argument for the ls command for the path to list or default to "/"
+
+Also note that there is no reason to prompt for username and password as long as the token has not been destroyed or expired.
+There is also no hardcoded endpoint for using the token - it is specified in the token cache or overridden by an environment variable.
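+
+For example, assuming the script above were saved as `samples/ExampleWebHdfsLsToken.groovy` (a hypothetical file name), the cached target URL could be overridden for a single run with the environment variable:
+
+    KNOXSHELL_TOPOLOGY_URL=https://localhost:8443/gateway/tokenbased bin/knoxshell.sh samples/ExampleWebHdfsLsToken.groovy /tmp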
+
+## Client DSL and SDK Details ##
+
+The lack of any formal SDK or client for Hadoop's REST APIs led to the creation of a very simple client intended to help people use and evaluate the gateway.
+The list below outlines the general requirements for such a client.
+
+* Promote the evaluation and adoption of the Apache Knox Gateway
+* Simple to deploy and use on data worker desktops for access to remote Hadoop clusters
+* Simple to extend with new commands both by other Hadoop projects and by the end user
+* Support the notion of a SSO session for multiple Hadoop interactions
+* Support the multiple authentication and federation token capabilities of the Apache Knox Gateway
+* Promote the use of REST APIs as the dominant remote client mechanism for Hadoop services
+* Promote the sense of Hadoop as a single unified product
+* Align with the Apache Knox Gateway's overall goals for security
+
+The result is a very simple DSL ([Domain Specific Language](http://en.wikipedia.org/wiki/Domain-specific_language)) of sorts that is used via [Groovy](http://groovy.codehaus.org) scripts.
+Here is an example of a command that copies a file from the local file system to HDFS.
+
+_Note: The variables `session`, `localFile` and `remoteFile` are assumed to be defined._
+
+    Hdfs.put(session).file(localFile).to(remoteFile).now()
+
+*This work is in very early development but is already very useful in its current state.*
+*We are very interested in receiving feedback about how to improve this feature and the DSL in particular.*
+
+A note of thanks to [REST-assured](https://code.google.com/p/rest-assured/) which provides a [Fluent interface](http://en.wikipedia.org/wiki/Fluent_interface) style DSL for testing REST services.
+It served as the initial inspiration for the creation of this DSL.
+
+### Assumptions ###
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply `java`.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed within the context of the `GATEWAY_HOME` current directory.
+The `GATEWAY_HOME` directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* A few examples require the use of commands from a standard Groovy installation.  These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+
+
+### Basics ###
+
+In order for secure connections to be made to the Knox gateway server over SSL, the user will need to trust
+the certificate presented by the gateway while connecting. The knoxcli `export-cert` command may be used to gain
+access to the gateway-identity certificate. It can then be imported into the `cacerts` file on the client machine, or put into a
+keystore that will be discovered as follows (an example follows the list):
+
+* In the user's home directory
+* In a directory specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_DIR`
+* In that same directory, with the keystore file name specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_FILENAME`
+* The default password is "changeit"; an alternative password may be specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_PASS`
+* Alternatively, the JSSE system property `javax.net.ssl.trustStore` can be used to specify the truststore location
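+
+For example, assuming the certificate was exported and copied to the user's home directory as `gateway-client-identity.jks` (as in the Quickstart above), either of the following approaches should work; the paths and password shown here are examples:
+
+    export KNOX_CLIENT_TRUSTSTORE_DIR=$HOME
+    export KNOX_CLIENT_TRUSTSTORE_FILENAME=gateway-client-identity.jks
+    export KNOX_CLIENT_TRUSTSTORE_PASS=changeit
+    java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy
+
+or
+
+    java -Djavax.net.ssl.trustStore=$HOME/gateway-client-identity.jks -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy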
+
+The DSL requires a shell to interpret the Groovy script.
+The shell can either be used interactively or to execute a script file.
+To simplify use, the distribution contains an embedded version of the Groovy shell.
+
+The shell can be run interactively. Use the command `exit` to exit.
+
+    java -jar bin/shell.jar
+
+When running interactively it may be helpful to reduce some of the output generated by the shell console.
+Use the following command in the interactive shell to reduce that output.
+This only needs to be done once as these preferences are persisted.
+
+    set verbosity QUIET
+    set show-last-result false
+
+Also when running interactively use the `exit` command to terminate the shell.
+Using `^C` to exit can sometimes leave the parent shell in a problematic state.
+
+The shell can also be used to execute a script by passing a single filename argument.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+
+
+### Examples ###
+
+Once the shell can be launched the DSL can be used to interact with the gateway and Hadoop.
+Below is a very simple example of an interactive shell session to upload a file to HDFS.
+
+    java -jar bin/shell.jar
+    knox:000> session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    knox:000> Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" ).now()
+
+The `knox:000>` in the example above is the prompt from the embedded Groovy console.
+If your output doesn't look like this you may need to set the verbosity and show-last-result preferences as described above.
+
+If you receive an error `HTTP/1.1 403 Forbidden` it may be because that file already exists.
+Try deleting it with the following command and then try again.
+
+    knox:000> Hdfs.rm(session).file("/tmp/example/README").now()
+
+Without using some other tool to browse HDFS it is hard to tell that this command did anything.
+Execute this to get a bit more feedback.
+
+    knox:000> println "Status=" + Hdfs.put( session ).file( "README" ).to( "/tmp/example/README2" ).now().statusCode
+    Status=201
+
+Notice that a different filename is used for the destination.
+Without this an error would have resulted.
+Of course the DSL also provides a command to list the contents of a directory.
+
+    knox:000> println Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    {"FileStatuses":{"FileStatus":[{"accessTime":1363711366977,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711366977,"owner":"guest","pathSuffix":"README","permission":"644","replication":1,"type":"FILE"},{"accessTime":1363711375617,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711375617,"owner":"guest","pathSuffix":"README2","permission":"644","replication":1,"type":"FILE"}]}}
+
+It is a design decision of the DSL to not provide type safe classes for various request and response payloads.
+Doing so would provide an undesirable coupling between the DSL and the service implementation.
+It also would make adding new commands much more difficult.
+See the Groovy section below for a variety of capabilities and tools for working with JSON and XML that make this easy.
+The example below shows the use of JsonSlurper and GPath to extract content from a JSON response.
+
+    knox:000> import groovy.json.JsonSlurper
+    knox:000> text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    knox:000> json = (new JsonSlurper()).parseText( text )
+    knox:000> println json.FileStatuses.FileStatus.pathSuffix
+    [README, README2]
+
+*In the future, "built-in" methods to slurp JSON and XML may be added to make this a bit easier.*
+*This would allow for the following type of single line interaction:*
+
+    println Hdfs.ls(session).dir("/tmp").now().json().FileStatuses.FileStatus.pathSuffix
+
+Shell sessions should always be ended by shutting down the session.
+The examples above do not touch on it, but the DSL supports the simple execution of commands asynchronously.
+The shutdown command attempts to ensure that all asynchronous commands have completed before exiting the shell.
+
+    knox:000> session.shutdown()
+    knox:000> exit
+
+All of the commands above could have been combined into a script file and executed as a single line.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+
+This would be the content of that script.
+
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import groovy.json.JsonSlurper
+    
+    gateway = "https://localhost:8443/gateway/sandbox"
+    username = "guest"
+    password = "guest-password"
+    dataFile = "README"
+    
+    session = Hadoop.login( gateway, username, password )
+    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()
+    Hdfs.put( session ).file( dataFile ).to( "/tmp/example/README" ).now()
+    text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    json = (new JsonSlurper()).parseText( text )
+    println json.FileStatuses.FileStatus.pathSuffix
+    session.shutdown()
+    exit
+
+Notice the `Hdfs.rm` command.  This is included simply to ensure that the script can be rerun.
+Without this an error would result the second time the script is run.
+
+### Futures ###
+
+The DSL supports the ability to invoke commands asynchronously via the `later()` invocation method.
+The object returned from the `later()` method is a `java.util.concurrent.Future` parameterized with the response type of the command.
+This is an example of how to asynchronously put a file to HDFS.
+
+    future = Hdfs.put(session).file("README").to("/tmp/example/README").later()
+    println future.get().statusCode
+
+The `future.get()` method will block until the asynchronous command is complete.
+To illustrate the usefulness of this, however, multiple concurrent commands are required.
+
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later()
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later()
+    session.waitFor( readmeFuture, licenseFuture )
+    println readmeFuture.get().statusCode
+    println licenseFuture.get().statusCode
+
+The `session.waitFor()` method will wait for one or more asynchronous commands to complete.
+
+
+### Closures ###
+
+Futures alone only provide asynchronous invocation of the command.
+What if some processing should also occur asynchronously once the command is complete?
+Support for this is provided by closures.
+Closures are blocks of code that are passed into the `later()` invocation method.
+In Groovy these are contained within `{}` immediately after a method.
+These blocks of code are executed once the asynchronous command is complete.
+
+    Hdfs.put(session).file("README").to("/tmp/example/README").later(){ println it.statusCode }
+
+In this example the `put()` command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
+The `it` variable is automatically populated by Groovy and is a reference to the result that is returned from the future or `now()` method.
+The future example above can be rewritten to illustrate the use of closures.
+
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later() { println it.statusCode }
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later() { println it.statusCode }
+    session.waitFor( readmeFuture, licenseFuture )
+
+Again, the `session.waitFor()` method will wait for one or more asynchronous commands to complete.
+
+
+### Constructs ###
+
+There are three primary constructs that need to be understood in order to use the DSL.
+
+
+#### Session ####
+
+This construct encapsulates the client side session state that will be shared between all command invocations.
+In particular it will simplify the management of any tokens that need to be presented with each command invocation.
+It also manages a thread pool that is used by all asynchronous commands which is why it is important to call one of the shutdown methods.
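+
+A minimal sketch of the pattern, using only calls shown elsewhere in this section, pairs the login with a shutdown so the thread pool is always released:
+
+    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    try {
+        // all commands share this session and its thread pool
+        println Hdfs.ls( session ).dir( "/" ).now().string
+    } finally {
+        session.shutdown()   // releases the thread pool used by asynchronous commands
+    }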
+
+The syntax associated with this is expected to change. We anticipate that credentials will not need to be provided to the gateway directly; rather, some form of access token will be used to initialize the session.
+
+
+#### Services ####
+
+Services are the primary extension point for adding new suites of commands.
+The current built-in examples are: Hdfs, Job and Workflow.
+The desire for extensibility is the reason for the slightly awkward `Hdfs.ls(session)` syntax.
+Certainly something more like `session.hdfs().ls()` would have been preferred but this would prevent adding new commands easily.
+At a minimum it would result in extension commands with a different syntax from the "built-in" commands.
+
+The service objects essentially function as a factory for a suite of commands.
+
+
+#### Commands ####
+
+Commands provide the behavior of the DSL.
+They typically follow a Fluent interface style in order to allow for single line commands.
+There are really three parts to each command: Request, Invocation, Response
+
+
+#### Request ####
+
+The request is populated by all of the methods between the "verb" method and the "invoke" method.
+For example, in `Hdfs.ls(session).dir(dir).now()` the request is populated between the "verb" method `ls()` and the "invoke" method `now()`.
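+
+As a sketch, the intermediate request object can also be held in a variable to make the three parts explicit; the parameter methods simply configure the request until it is invoked:
+
+    request = Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" )   // verb plus request methods
+    response = request.now()                                                     // invocation
+    println response.statusCode                                                  // response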
+
+
+#### Invocation ####
+
+The invocation method controls how the request is invoked.
+Currently synchronous and asynchronous invocations are supported.
+The `now()` method executes the request and returns the result immediately.
+The `later()` method submits the request to be executed later and returns a future from which the result can be retrieved.
+In addition, the `later()` invocation method can optionally be provided a closure to execute when the request is complete.
+See the Futures and Closures sections above for additional detail and examples.
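+
+Here is a compact sketch of the two invocation styles side by side, reusing commands from the earlier examples:
+
+    // synchronous: block until the response is available
+    println Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+
+    // asynchronous: submit now, handle the result in a closure when it completes
+    future = Hdfs.ls( session ).dir( "/tmp/example" ).later() { println it.string }
+    session.waitFor( future )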
+
+
+#### Response ####
+
+The response contains the results of the invocation of the request.
+In most cases the response is a thin wrapper over the HTTP response.
+In fact many commands will share a single BasicResponse type that only provides a few simple methods.
+
+    public int getStatusCode()
+    public long getContentLength()
+    public String getContentType()
+    public String getContentEncoding()
+    public InputStream getStream()
+    public String getString()
+    public byte[] getBytes()
+    public void close();
+
+Thanks to Groovy these methods can be accessed as attributes.
+In some of the examples above, the statusCode attribute was retrieved this way.
+
+    println Hdfs.rm( session ).file( dir ).now().statusCode
+
+Groovy will invoke the getStatusCode method to retrieve the statusCode attribute.
+
+The three methods `getStream()`, `getBytes()` and `getString()` deserve special attention.
+Care must be taken that the HTTP body is fully read once and only once.
+Therefore one of these methods (and only one) must be called once and only once.
+Calling one of these more than once will cause an error.
+Failing to call one of these methods once will result in lingering open HTTP connections.
+The `close()` method may be used if the caller is not interested in reading the result body.
+Most commands that do not expect a response body will call close implicitly.
+If the body is retrieved via `getBytes()` or `getString()`, the `close()` method need not be called.
+When using `getStream()`, care must be taken to consume the entire body otherwise lingering open HTTP connections will result.
+The `close()` method may be called after reading the body partially to discard the remainder of the body.
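+
+The following sketch, using only methods listed above, shows one safe way to handle a streamed response body:
+
+    response = Hdfs.ls( session ).dir( "/tmp/example" ).now()
+    stream = response.stream            // the body must be read exactly once
+    try {
+        println stream.text             // Groovy reads the stream to completion
+    } finally {
+        response.close()                // discards any unread remainder of the body
+    }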
+
+
+### Services ###
+
+The built-in supported client DSL for each Hadoop service can be found in the #[Service Details] section.
+
+
+### Extension ###
+
+Extensibility is a key design goal of the KnoxShell and client DSL.
+There are two ways to provide extended functionality for use with the shell.
+The first is to simply create Groovy scripts that use the DSL to perform a useful task.
+The second is to add new services and commands.
+In order to add new services and commands, new classes must be written in either Groovy or Java and added to the classpath of the shell.
+Fortunately there is a very simple way to add classes and JARs to the shell classpath.
+The first time the shell is executed it will create a configuration file in the same directory as the JAR with the same base name and a `.cfg` extension.
+
+    bin/shell.jar
+    bin/shell.cfg
+
+That file contains both the main class for the shell as well as a definition of the classpath.
+Currently that file will by default contain the following.
+
+    main.class=org.apache.knox.gateway.shell.Shell
+    class.path=../lib; ../lib/*.jar; ../ext; ../ext/*.jar
+
+Therefore, to extend the shell, copy any new service and command classes to the `ext` directory or, if they are packaged within a JAR, copy the JAR to the `ext` directory.
+The `lib` directory is reserved for JARs that may be delivered with the product.
+
+Below are samples for the service and command classes that would need to be written to add new commands to the shell.
+These happen to be Groovy source files but could - with very minor changes - be Java files.
+The easiest way to add these to the shell is to compile them directly into the `ext` directory.
+*Note: This command depends upon having the Groovy compiler installed and available on the execution path.*
+
+    groovy -d ext -cp bin/shell.jar samples/SampleService.groovy \
+        samples/SampleSimpleCommand.groovy samples/SampleComplexCommand.groovy
+
+These source files are available in the samples directory of the distribution but are included here for convenience.
+
+
+#### Sample Service (Groovy)
+
+    import org.apache.knox.gateway.shell.Hadoop
+
+    class SampleService {
+
+        static String PATH = "/webhdfs/v1"
+
+        static SimpleCommand simple( Hadoop session ) {
+            return new SimpleCommand( session )
+        }
+
+        static ComplexCommand.Request complex( Hadoop session ) {
+            return new ComplexCommand.Request( session )
+        }
+
+    }
+
+#### Sample Simple Command (Groovy)
+
+    import org.apache.knox.gateway.shell.AbstractRequest
+    import org.apache.knox.gateway.shell.BasicResponse
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class SimpleCommand extends AbstractRequest<BasicResponse> {
+
+        SimpleCommand( Hadoop session ) {
+            super( session )
+        }
+
+        private String param
+        SimpleCommand param( String param ) {
+            this.param = param
+            return this
+        }
+
+        @Override
+        protected Callable<BasicResponse> callable() {
+            return new Callable<BasicResponse>() {
+                @Override
+                BasicResponse call() {
+                    URIBuilder uri = uri( SampleService.PATH, param )
+                    addQueryParam( uri, "op", "LISTSTATUS" )
+                    HttpGet get = new HttpGet( uri.build() )
+                    return new BasicResponse( execute( get ) )
+                }
+            }
+        }
+
+    }
+
+
+#### Sample Complex Command (Groovy)
+
+    import com.jayway.jsonpath.JsonPath
+    import org.apache.knox.gateway.shell.AbstractRequest
+    import org.apache.knox.gateway.shell.BasicResponse
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.http.HttpResponse
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class ComplexCommand {
+
+        static class Request extends AbstractRequest<Response> {
+
+            Request( Hadoop session ) {
+                super( session )
+            }
+
+            private String param;
+            Request param( String param ) {
+                this.param = param;
+                return this;
+            }
+
+            @Override
+            protected Callable<Response> callable() {
+                return new Callable<Response>() {
+                    @Override
+                    Response call() {
+                        URIBuilder uri = uri( SampleService.PATH, param )
+                        addQueryParam( uri, "op", "LISTSTATUS" )
+                        HttpGet get = new HttpGet( uri.build() )
+                        return new Response( execute( get ) )
+                    }
+                }
+            }
+
+        }
+
+        static class Response extends BasicResponse {
+
+            Response(HttpResponse response) {
+                super(response)
+            }
+
+            public List<String> getNames() {
+                return JsonPath.read( string, "\$.FileStatuses.FileStatus[*].pathSuffix" )
+            }
+
+        }
+
+    }
+
+
+### Groovy
+
+The shell included in the distribution is basically an unmodified packaging of the Groovy shell.
+The distribution does however provide a wrapper that makes it very easy to set up the class path for the shell.
+In fact the JARs required to execute the DSL are included on the class path by default.
+Therefore these commands are functionally equivalent if you have Groovy installed.
+See below for a description of the JARs required by the DSL from the `lib` and `dep` directories.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+    groovy -classpath {JARs required by the DSL from lib and dep} samples/ExampleWebHdfsPutGet.groovy
+
+The interactive shell isn't exactly equivalent.
+However the only difference is that the shell.jar automatically executes some additional imports that are useful for the KnoxShell client DSL.
+So these two sets of commands should be functionally equivalent.
+*However there is currently a class loading issue that prevents the groovysh command from working properly.*
+
+    java -jar bin/shell.jar
+
+    groovysh -classpath {JARs required by the DSL from lib and dep}
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import org.apache.knox.gateway.shell.job.Job
+    import org.apache.knox.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+Alternatively, you can use the Groovy Console which does not appear to have the same class loading issue.
+
+    groovyConsole -classpath {JARs required by the DSL from lib and dep}
+
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import org.apache.knox.gateway.shell.job.Job
+    import org.apache.knox.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+The JARs currently required by the client DSL are
+
+    lib/gateway-shell-{GATEWAY_VERSION}.jar
+    dep/httpclient-4.3.6.jar
+    dep/httpcore-4.3.3.jar
+    dep/commons-lang3-3.4.jar
+    dep/commons-codec-1.7.jar
+
+So on Linux/MacOS you would need this command
+
+    groovy -cp lib/gateway-shell-0.10.0.jar:dep/httpclient-4.3.6.jar:dep/httpcore-4.3.3.jar:dep/commons-lang3-3.4.jar:dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+and on Windows you would need this command
+
+    groovy -cp lib/gateway-shell-0.10.0.jar;dep/httpclient-4.3.6.jar;dep/httpcore-4.3.3.jar;dep/commons-lang3-3.4.jar;dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+The exact list of required JARs is likely to change from release to release so it is recommended that you utilize the wrapper `bin/shell.jar`.
+
+In addition because the DSL can be used via standard Groovy, the Groovy integrations in many popular IDEs (e.g. IntelliJ, Eclipse) can also be used.
+This makes it particularly nice to develop and execute scripts to interact with Hadoop.
+The code-completion features in modern IDEs in particular provide immense value.
+All that is required is to add the `gateway-shell-{GATEWAY_VERSION}.jar` to the project's class path.
+
+There are a variety of Groovy tools that make it very easy to work with the standard interchange formats (i.e. JSON and XML).
+In Groovy the creation of XML or JSON is typically done via a "builder" and parsing done via a "slurper".
+In addition, once JSON or XML is "slurped", GPath, an XPath-like feature built into Groovy, can be used to access data.
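+
+For example, here is a small sketch of slurping an XML fragment and navigating it with GPath (the XML is an inline string here rather than a gateway response):
+
+    text = "<FileStatuses><FileStatus><pathSuffix>README</pathSuffix></FileStatus></FileStatuses>"
+    xml = new XmlSlurper().parseText( text )
+    println xml.FileStatus.pathSuffix.text()   // prints: README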
+
+* XML
+    * Markup Builder [Overview](http://groovy.codehaus.org/Creating+XML+using+Groovy's+MarkupBuilder), [API](http://groovy.codehaus.org/api/groovy/xml/MarkupBuilder.html)
+    * XML Slurper [Overview](http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper), [API](http://groovy.codehaus.org/api/groovy/util/XmlSlurper.html)
+    * XPath [Overview](http://groovy.codehaus.org/GPath), [API](http://docs.oracle.com/javase/1.5.0/docs/api/javax/xml/xpath/XPath.html)
+* JSON
+    * JSON Builder [API](http://groovy.codehaus.org/gapi/groovy/json/JsonBuilder.html)
+    * JSON Slurper [API](http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html)
+    * JSON Path [API](https://code.google.com/p/json-path/)
+    * GPath [Overview](http://groovy.codehaus.org/GPath)
+

Added: knox/trunk/books/1.4.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_gateway-details.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_gateway-details.md (added)
+++ knox/trunk/books/1.4.0/book_gateway-details.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,107 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+## Gateway Details ##
+
+This section describes the details of the Knox Gateway itself, including:
+
+* How URLs are mapped between a gateway that services multiple Hadoop clusters and the clusters themselves
+* How the gateway is configured through `gateway-site.xml` and cluster specific topology files
+* How to configure the various policy enforcement provider features such as authentication, authorization, auditing, hostmapping, etc.
+
+### URL Mapping ###
+
+The gateway functions much like a reverse proxy.
+As such, it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.
+
+#### Default Topology URLs #####
+In order to provide compatibility with the Hadoop Java client and existing CLI tools, the Knox Gateway has provided a feature called the _Default Topology_. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients that may access WebHDFS through the Hadoop file system abstraction.
+
+When a topology file is deployed with a file name that matches the configured default topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the existing Hadoop CLIs for WebHDFS to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.
+
+The configuration for the default topology name is found in `gateway-site.xml` as a property called: `default.app.topology.name`.
+
+The default value for this property is empty.
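+
+For example, to make a topology named `sandbox` the default topology, the property could be set in `gateway-site.xml` as in the following sketch (substitute your own topology name):
+
+    <property>
+        <name>default.app.topology.name</name>
+        <value>sandbox</value>
+    </property>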
+
+
+When deploying the `sandbox.xml` topology and setting `default.app.topology.name` to `sandbox`, both of the following example URLs work for the same underlying Hadoop cluster:
+
+    https://{gateway-host}:{gateway-port}/webhdfs
+    https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
+
+These default topology URLs exist for all of the services in the topology.
+
+#### Fully Qualified URLs #####
+Examples of mappings for WebHDFS, WebHCat, Oozie and HBase are shown below.
+These mapping are generated from the combination of the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`) and the cluster topology descriptors (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
+The port numbers shown for the Cluster URLs represent the default ports for these services.
+The actual port number may be different for a given cluster.
+
+* WebHDFS
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs`
+    * Cluster: `http://{webhdfs-host}:50070/webhdfs`
+* WebHCat (Templeton)
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/templeton`
+    * Cluster: `http://{webhcat-host}:50111/templeton`
+* Oozie
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/oozie`
+    * Cluster: `http://{oozie-host}:11000/oozie`
+* HBase
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase`
+    * Cluster: `http://{hbase-host}:8080`
+* Hive JDBC
+    * Gateway: `jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive`
+    * Cluster: `http://{hive-host}:10001/cliservice`
+
+The values for `{gateway-host}`, `{gateway-port}`, `{gateway-path}` are provided via the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`).
+
+The value for `{cluster-name}` is derived from the file name of the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+
+The values for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}`, `{hbase-host}` and `{hive-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
+
+Note: The ports 50070 (9870 for Hadoop 3.x), 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, WebHCat, Oozie, HBase and Hive respectively.
+Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.
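+
+For example, a WebHDFS entry in the cluster topology descriptor might look like the following sketch, with the host and port adjusted to match your cluster:
+
+    <service>
+        <role>WEBHDFS</role>
+        <url>http://{webhdfs-host}:50070/webhdfs</url>
+    </service>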
+
+Note: The HBase REST API uses port 8080 by default. This often clashes with other running services.
+In the Hortonworks Sandbox, Apache Ambari might be running on this port, so you might have to change it to a different port (e.g. 60080).
+
+<<book_topology_port_mapping.md>>
+<<config.md>>
+<<knox_cli.md>>
+<<admin_api.md>>
+<<x-forwarded-headers.md>>
+<<config_metrics.md>>
+<<config_authn.md>>
+<<config_advanced_ldap.md>>
+<<config_ldap_authc_cache.md>>
+<<config_ldap_group_lookup.md>>
+<<config_pam_authn.md>>
+<<config_id_assertion.md>>
+<<config_authz.md>>
+<<config_kerberos.md>>
+<<config_ha.md>>
+<<config_webappsec_provider.md>>
+<<config_hadoop_auth_provider.md>>
+<<config_preauth_sso_provider.md>>
+<<config_sso_cookie_provider.md>>
+<<config_pac4j_provider.md>>
+<<config_knox_sso.md>>
+<<config_knox_token.md>>
+<<config_mutual_authentication_ssl.md>>
+<<config_tls_client_certificate_authentication_provider.md>>
+<<websocket-support.md>>
+<<config_audit.md>>

Added: knox/trunk/books/1.4.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_getting-started.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_getting-started.md (added)
+++ knox/trunk/books/1.4.0/book_getting-started.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,95 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Apache Knox Details ##
+
+This section provides everything you need to know to get the Knox gateway up and running against a Hadoop cluster.
+
+#### Hadoop ####
+
+An existing Hadoop 2.x or 3.x cluster is required for Knox to sit in front of and protect.
+It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here.
+It is also possible to protect access to the services of a Hadoop cluster that is secured with Kerberos.
+This too requires additional configuration that is described in other sections of this guide.
+See #[Supported Services] for details on what is supported for this release.
+
+The instructions that follow assume a few things:
+
+1. The gateway is *not* collocated with the Hadoop clusters themselves.
+2. The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.
+
+All of the instructions and samples provided here are tailored and tested to work "out of the box" against a [Hortonworks Sandbox 2.x VM][sandbox].
+
+
+#### Apache Knox Directory Layout ####
+
+Knox can be installed by expanding the zip/archive file.
+
+The table below provides a brief explanation of the important files and directories within `{GATEWAY_HOME}`
+
+| Directory                | Purpose |
+| ------------------------ | ------- |
+| conf/                    | Contains configuration files that apply to the gateway globally (i.e. not cluster specific). |
+| data/                    | Contains security and topology specific artifacts that require read/write access at runtime |
+| conf/topologies/         | Contains topology files that represent Hadoop clusters which the gateway uses to deploy cluster proxies |
+| data/security/           | Contains the persisted master secret and keystore dir |
+| data/security/keystores/ | Contains the gateway identity keystore and credential stores for the gateway and each deployed cluster topology |
+| data/services            | Contains service behavior definitions for the services currently supported. |
+| bin/                     | Contains the executable shell scripts, batch files and JARs for clients and servers. |
+| data/deployments/        | Contains deployed cluster topologies used to protect access to specific Hadoop clusters. |
+| lib/                     | Contains the JARs for all the components that make up the gateway. |
+| dep/                     | Contains the JARs for all of the components upon which the gateway depends. |
+| ext/                     | A directory where user supplied extension JARs can be placed to extend the gateway's functionality. |
+| pids/                    | Contains the process ids for running LDAP and gateway servers |
+| samples/                 | Contains a number of samples that can be used to explore the functionality of the gateway. |
+| templates/               | Contains default configuration files that can be copied and customized. |
+| README                   | Provides basic information about the Apache Knox Gateway. |
+| ISSUES                   | Describes significant known issues. |
+| CHANGES                  | Enumerates the changes between releases. |
+| LICENSE                  | Documents the license under which this software is provided. |
+| NOTICE                   | Documents required attribution notices for included dependencies. |
+
+
+### Supported Services ###
+
+This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.
+
+| Service                | Version     | Non-Secure  | Secure | HA |
+| -----------------------|-------------|-------------|--------|----|
+| WebHDFS                | 2.4.0       | ![y]        | ![y]   |![y]|
+| WebHCat/Templeton      | 0.13.0      | ![y]        | ![y]   |![y]|
+| Oozie                  | 4.0.0       | ![y]        | ![y]   |![y]|
+| HBase                  | 0.98.0      | ![y]        | ![y]   |![y]|
+| Hive (via WebHCat)     | 0.13.0      | ![y]        | ![y]   |![y]|
+| Hive (via JDBC/ODBC)   | 0.13.0      | ![y]        | ![y]   |![y]|
+| Yarn ResourceManager   | 2.5.0       | ![y]        | ![y]   |![n]|
+| Kafka (via REST Proxy) | 0.10.0      | ![y]        | ![y]   |![y]|
+| Storm                  | 0.9.3       | ![y]        | ![n]   |![n]|
+| Solr                   | 5.5+ and 6+ | ![y]        | ![y]   |![y]|
+
+
+### More Examples ###
+
+These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.
+
+* #[WebHDFS Examples]
+* #[WebHCat Examples]
+* #[Oozie Examples]
+* #[HBase Examples]
+* #[Hive Examples]
+* #[Yarn Examples]
+* #[Storm Examples]

Added: knox/trunk/books/1.4.0/book_knox-samples.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_knox-samples.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_knox-samples.md (added)
+++ knox/trunk/books/1.4.0/book_knox-samples.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,69 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Gateway Samples ###
+
+The purpose of the samples within the `{GATEWAY_HOME}/samples` directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.
+
+Depending on exactly how your Knox installation was done, there will be some number of steps required in order to fully install and configure the samples for use.
+
+This section will help describe the assumptions of the samples and the steps to get them to work in a couple of different deployment scenarios.
+
+#### Assumptions of the Samples ####
+
+The samples were initially written with the intent of working out of the box for the various Hadoop demo environments that are deployed as a single node cluster inside of a VM. The following assumptions were made from that context and should be understood in order to get the samples to work in other deployment scenarios:
+
+* That there is a valid Java JDK on the PATH for executing the samples
+* The Knox Demo LDAP server is running on localhost and port 33389, which is the default port for the ApacheDS LDAP server.
+* That the LDAP directory in use has a set of demo users provisioned with the convention of username as the user name and username + "-password" as the password. Most of the samples use some variation of this pattern with "guest" and "guest-password".
+* That the Knox Gateway instance is running on the same machine from which you will be running the samples - therefore "localhost" - and that the default port of "8443" is being used.
+* Finally, that there is a properly provisioned sandbox.xml topology in the `{GATEWAY_HOME}/conf/topologies` directory that is configured to point to the actual host and ports of running service components.
+
+#### Steps for Demo Single Node Clusters ####
+
+There should be little, if anything, to do in a demo environment that has been provisioned for illustrating the use of Apache Knox.
+
+However, the following items will be worth ensuring before you start:
+
+1. The `sandbox.xml` topology is configured properly for the deployed services
+2. That there is an LDAP server running with the guest/guest-password user available in the directory
+
+#### Steps for Ambari deployed Knox Gateway ####
+
+Apache Knox instances that are under the management of Ambari are generally assumed not to be demo instances. These instances are in place to facilitate development, testing or production Hadoop clusters.
+
+The Knox samples can however be made to work with Ambari managed Knox instances with a few steps:
+
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid
+2. The Knox Demo LDAP Server is started - you can start it from Ambari
+3. The `default.xml` topology file can be copied to `sandbox.xml` in order to satisfy the topology name assumption in the samples
+4. Be sure to use an actual Java JRE to run the sample with something like:
+
+        /usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy
+
+#### Steps for a manually installed Knox Gateway ####
+
+For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.
+
+Essentially, these steps are identical to those for an Ambari deployed instance, except that step 3 is replaced with configuring the out-of-the-box `sandbox.xml` to point at the proper hosts and ports.
+
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid.
+2. The Knox Demo LDAP Server is started - for a manual installation it can be started with the `ldap.sh` script in the `{GATEWAY_HOME}/bin` directory
+3. Change the hosts and ports within the `{GATEWAY_HOME}/conf/topologies/sandbox.xml` to reflect your actual cluster service locations.
+4. Be sure to use an actual Java JRE to run the sample with something like:
+
+        /usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy

Added: knox/trunk/books/1.4.0/book_limitations.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_limitations.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_limitations.md (added)
+++ knox/trunk/books/1.4.0/book_limitations.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,39 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Limitations ##
+
+
+### Secure Oozie POST/PUT Request Payload Size Restriction ###
+
+With one exception there are no known size limits for request or response payloads that pass through the gateway.
+The exception involves POST or PUT request payload sizes for Oozie in a Kerberos secured Hadoop cluster.
+In this one case there is currently a 4Kb payload size limit for the first request made to the Hadoop cluster.
+This is a result of how the gateway negotiates a trust relationship between itself and the cluster via SPNEGO.
+There is an undocumented configuration setting to modify this limit's value if required.
+In the future this will be made more easily configurable and at that time it will be documented.
+
+### Group Membership Propagation ###
+
+Groups that are acquired via Shiro Group Lookup and/or Identity Assertion Group Principal Mapping are not propagated to the Hadoop services.
+Therefore, groups used for Service Level Authorization policy may not match those acquired within the cluster via GroupMappingServiceProvider plugins.
+
+### Knox Consumer Restriction ###
+
+Consumption of messages via Knox at this time is not supported.  The Confluent Kafka REST Proxy that Knox relies upon is stateful when used for
+consumption of messages.
+

Added: knox/trunk/books/1.4.0/book_service-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_service-details.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_service-details.md (added)
+++ knox/trunk/books/1.4.0/book_service-details.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,98 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+## Service Details ##
+
+In the sections that follow, the integrations currently available out of the box with the gateway will be described.
+In general these sections will include examples that demonstrate how to access each of these services via the gateway.
+In many cases this will include both the use of [cURL][curl] as a REST API client as well as the use of the Knox Client DSL.
+You may notice that there are some minor differences between using the REST API of a given service directly and using it via the gateway.
+In general this is necessary in order to achieve the goal of not leaking internal Hadoop cluster details to the client.
+
+Keep in mind that the gateway uses a plugin model for supporting Hadoop services.
+Check back with the [Apache Knox][site] site for the latest news on plugin availability.
+You can also create your own custom plugin to extend the capabilities of the gateway.
+
+These are the current Hadoop services with built-in support.
+
+* #[WebHDFS]
+* #[WebHCat]
+* #[Oozie]
+* #[HBase]
+* #[Hive]
+* #[Yarn]
+* #[Kafka]
+* #[Storm]
+* #[Solr]
+* #[Avatica]
+* #[Livy Server]
+* #[Elasticsearch]
+
+### Assumptions
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply `java`.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed within the context of the `GATEWAY_HOME` current directory.
+The `GATEWAY_HOME` directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* The [cURL][curl] command line HTTP client utility is installed and functional.
+* A few examples optionally require the use of commands from a standard Groovy installation.
+These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+* The default configuration for all of the samples is set up for use with Hortonworks' [Sandbox][sandbox] version 2.
+
+### Customization
+
+Using these samples with other Hadoop installations will require changes to the steps described here as well as changes to referenced sample scripts.
+This will also likely require changes to the gateway's default configuration.
+In particular, host names, ports, user names and passwords may need to be changed to match your environment.
+These changes may need to be made to the gateway configuration and also to the Groovy sample script files in the distribution.
+All of the values that may need to be customized in the sample scripts can be found together at the top of each of these files.
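+
+To make this concrete, the customizable values at the top of a sample script typically look something like the sketch below; the exact variable names vary from sample to sample, and the gateway URL, username and password shown here are just the Sandbox defaults, which you would replace for your environment.
+
+    gateway = "https://localhost:8443/gateway/sandbox"
+    username = "guest"
+    password = "guest-password"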
+
+### cURL
+
+The cURL HTTP client command line utility is used extensively in the examples for each service.
+In particular this form of the cURL command line is used repeatedly.
+
+    curl -i -k -u guest:guest-password ...
+
+The option `-i` (aka `--include`) is used to output HTTP response header information.
+This will be important when the content of the HTTP Location header is required for subsequent requests.
+
+The option `-k` (aka `--insecure`) is used to avoid any issues resulting from the use of demonstration SSL certificates.
+
+The option `-u` (aka `--user`) is used to provide the credentials to be used when the client is challenged by the gateway.
+
+Keep in mind that, for the sake of simplicity, the samples do not use the cookie features of cURL.
+Therefore each request via cURL will result in a separate authentication.
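+
+Putting it together, a complete command that lists the root directory of HDFS through a gateway protecting a topology named sandbox might look like the following; the host, port and topology name are only assumptions based on the Sandbox defaults and will differ in your environment.
+
+    curl -i -k -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'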
+
+<<service_webhdfs.md>>
+<<service_webhcat.md>>
+<<service_oozie.md>>
+<<service_hbase.md>>
+<<service_hive.md>>
+<<service_yarn.md>>
+<<service_kafka.md>>
+<<service_storm.md>>
+<<service_solr.md>>
+<<service_config.md>>
+<<service_default_ha.md>>
+<<service_avatica.md>>
+<<service_livy.md>>
+<<service_elasticsearch.md>>
+<<service_ssl_certificate_trust.md>>
+<<service_service_test.md>>

Added: knox/trunk/books/1.4.0/book_topology_port_mapping.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_topology_port_mapping.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_topology_port_mapping.md (added)
+++ knox/trunk/books/1.4.0/book_topology_port_mapping.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,35 @@
+#### Topology Port Mapping #####
+This feature allows a topology to be mapped to a port so that a specific topology listens on a configured port. This feature
+routes URLs to these port-mapped topologies without the additional context path that the gateway normally uses to differentiate one Hadoop cluster from another,
+just like the #[Default Topology URLs] feature, but on a dedicated port.
+
+The configuration for Topology Port Mapping goes in the `gateway-site.xml` file and uses the standard property name and value model.
+The format for the property name is `gateway.port.mapping.{topologyName}` and the value is the port number that this
+topology will listen on.
+
+In the following example, the topology `development` will listen on 9443 (if the port is not already taken).
+
+      <property>
+          <name>gateway.port.mapping.development</name>
+          <value>9443</value>
+          <description>Topology and Port mapping</description>
+      </property>
+
+The following example shows how the WebHDFS URL can be accessed using the above configuration:
+
+     https://{gateway-host}:9443/webhdfs
+     https://{gateway-host}:9443/{gateway-path}/development/webhdfs
+     https://{gateway-host}:{gateway-port}/{gateway-path}/development/webhdfs
+
+All of the above URLs are valid for the configuration described above.
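+
+For instance, assuming the `development` topology contains a WebHDFS service and uses the example mapping above, a request against the dedicated port might look like the following; the host and credentials are placeholders.
+
+    curl -i -k -u guest:guest-password -X GET 'https://{gateway-host}:9443/webhdfs/v1/?op=LISTSTATUS'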
+
+This feature is turned on by default. To turn it off, use the property `gateway.port.mapping.enabled`,
+e.g.
+
+     <property>
+         <name>gateway.port.mapping.enabled</name>
+         <value>false</value>
+         <description>Enable/Disable port mapping feature.</description>
+     </property>
+
+If a topology-mapped port is already in use by another topology or process, an ERROR message is logged and gateway startup continues as normal.

Added: knox/trunk/books/1.4.0/book_troubleshooting.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/book_troubleshooting.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/book_troubleshooting.md (added)
+++ knox/trunk/books/1.4.0/book_troubleshooting.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,320 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Troubleshooting ##
+
+### Finding Logs ###
+
+When things aren't working the first thing you need to do is examine the diagnostic logs.
+Depending upon how you are running the gateway these diagnostic logs will be output to different locations.
+
+#### java -jar bin/gateway.jar ####
+
+When the gateway is run this way the diagnostic output is written directly to the console.
+If you want to capture that output you will need to redirect the console output to a file using OS specific techniques.
+
+    java -jar bin/gateway.jar > gateway.log
+
+#### bin/gateway.sh start ####
+
+When the gateway is run this way the diagnostic output is written to `{GATEWAY_HOME}/log/knox.out` and `{GATEWAY_HOME}/log/knox.err`.
+Typically only knox.out will have content.
+
+
+### Increasing Logging ###
+
+The `log4j.properties` file in `{GATEWAY_HOME}/conf` can be used to change the granularity of the logging done by Knox.
+The Knox server must be restarted in order for these changes to take effect.
+There are various useful loggers pre-populated but commented out.
+
+    log4j.logger.org.apache.knox.gateway=DEBUG # Use this logger to increase the debugging of Apache Knox itself.
+    log4j.logger.org.apache.shiro=DEBUG          # Use this logger to increase the debugging of Apache Shiro.
+    log4j.logger.org.apache.http=DEBUG           # Use this logger to increase the debugging of Apache HTTP components.
+    log4j.logger.org.apache.http.client=DEBUG    # Use this logger to increase the debugging of Apache HTTP client component.
+    log4j.logger.org.apache.http.headers=DEBUG   # Use this logger to increase the debugging of Apache HTTP header.
+    log4j.logger.org.apache.http.wire=DEBUG      # Use this logger to increase the debugging of Apache HTTP wire traffic.
+
+
+### LDAP Server Connectivity Issues ###
+
+If the gateway cannot contact the configured LDAP server you will see errors in the gateway diagnostic output.
+
+    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQ=]
+    13/11/15 16:30:17 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
+    13/11/15 16:30:17 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL [ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
+    13/11/15 16:30:17 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:30:17 GMT]
+    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
+
+The client should see something along the lines of:
+
+    HTTP/1.1 401 Unauthorized
+    WWW-Authenticate: BASIC realm="application"
+    Content-Length: 0
+    Server: Jetty(8.1.12.v20130726)
+
+Resolving this will require ensuring that the LDAP server is running and that connection information is correct.
+The LDAP server connection information is configured in the cluster's topology file (e.g. `{GATEWAY_HOME}/deployments/sandbox.xml`).
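+
+For reference, the LDAP URL used by the default Shiro authentication provider is set via a provider param in that topology file. A typical Sandbox-style entry looks roughly like the snippet below, with the URL value adjusted to point at your LDAP server.
+
+    <param>
+        <name>main.ldapRealm.contextFactory.url</name>
+        <value>ldap://localhost:33389</value>
+    </param>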
+
+
+### Hadoop Cluster Connectivity Issues ###
+
+If the gateway cannot contact one of the services in the configured Hadoop cluster you will see errors in the gateway diagnostic output.
+
+    13/11/18 18:49:45 WARN knox.gateway: Connection exception dispatching request: http://localhost:50070/webhdfs/v1/?user.name=guest&op=LISTSTATUS org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
+    org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
+      at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
+      at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
+      at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
+      at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
+      at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
+      at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
+      at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
+      at org.apache.knox.gateway.dispatch.HttpClientDispatch.executeRequest(HttpClientDispatch.java:99)
+
+The resulting behavior on the client will differ by client.
+For the client DSL executing the `{GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy` the output will look like this.
+
+    Caught: org.apache.knox.gateway.shell.HadoopException: org.apache.knox.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
+    org.apache.knox.gateway.shell.HadoopException: org.apache.knox.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
+      at org.apache.knox.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
+      at org.apache.knox.gateway.shell.AbstractRequest$now.call(Unknown Source)
+      at ExampleWebHdfsLs.run(ExampleWebHdfsLs.groovy:28)
+
+When executing requests via cURL the output might look similar to the following example.
+
+    Set-Cookie: JSESSIONID=16xwhpuxjr8251ufg22f8pqo85;Path=/gateway/sandbox;Secure
+    Content-Type: text/html;charset=ISO-8859-1
+    Cache-Control: must-revalidate,no-cache,no-store
+    Content-Length: 21856
+    Server: Jetty(8.1.12.v20130726)
+
+    <html>
+    <head>
+    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
+    <title>Error 500 Server Error</title>
+    </head>
+    <body><h2>HTTP ERROR 500</h2>
+
+Resolving this will require ensuring that the Hadoop services are running and that connection information is correct.
+Basic Hadoop connectivity can be evaluated using cURL as described elsewhere.
+Otherwise the Hadoop cluster connection information is configured in the cluster's topology file (e.g. `{GATEWAY_HOME}/deployments/sandbox.xml`).
+
+### HTTP vs HTTPS protocol issues ###
+When Knox is configured to accept requests over SSL and is presented with a request over plain HTTP, the client receives an error such as the following:
+
+    curl -i -k -u guest:guest-password -X GET 'http://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
+
+The following error is returned:
+
+    curl: (52) Empty reply from server
+
+This is the default behavior of the Jetty SSL listener. While the credentials for the default authentication provider continue to be username and password, we do not want to encourage sending these in clear text. Since preemptively sending BASIC credentials is a common pattern with REST APIs, it would be unwise to redirect to an HTTPS listener and thereby allow clear text passwords.
+
+To resolve this issue, we have two options:
+
+1. Change the scheme in the URL to HTTPS and deal with any trust relationship issues with the presented server certificate, as shown in the example below.
+2. Disable SSL in `gateway-site.xml` - this is not encouraged for the reasons described above.
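+
+For the first option, the request shown earlier would simply be reissued over HTTPS, for example as follows (using the demonstration credentials and `-k` to accept the self-signed demonstration certificate):
+
+    curl -i -k -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'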
+
+### Check Hadoop Cluster Access via cURL ###
+
+When you are experiencing connectivity issue it can be helpful to "bypass" the gateway and invoke the Hadoop REST APIs directly.
+This can easily be done using the cURL command line utility or many other REST/HTTP clients.
+Exactly how to use cURL depends on the configuration of your Hadoop cluster.
+In general, however, you will use a command line like the one that follows.
+
+    curl -ikv -X GET 'http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using the Sandbox, the WebHDFS or NameNode port will be mapped to localhost, so this command can be used.
+
+    curl -ikv -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using a cluster secured with Kerberos you will need to have used `kinit` to authenticate to the KDC.
+Then the command below should verify that WebHDFS in the Hadoop cluster is accessible.
+
+    curl -ikv --negotiate -u : -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
+
+
+### Authentication Issues ###
+The following log information is available when you enable debug level logging for Shiro. This can be done within the `conf/log4j.properties` file. Note the "Password not correct for user" message.
+
+    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQw]
+    13/11/15 16:37:15 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
+    13/11/15 16:37:15 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL [ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
+    2013-11-15 16:37:15,899 INFO  Password not correct for user 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org'
+    2013-11-15 16:37:15,899 INFO  Authenticator org.apache.directory.server.core.authn.SimpleAuthenticator@354c78e3 failed to authenticate: BindContext for DN 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org', credentials <0x67 0x75 0x65 0x73 0x74 0x2D 0x70 0x61 0x73 0x73 0x77 0x6F 0x72 0x64 0x30 >
+    2013-11-15 16:37:15,899 INFO  Cannot bind to the server
+    13/11/15 16:37:15 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:37:15 GMT]
+    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
+
+The client will likely see something along the lines of:
+
+    HTTP/1.1 401 Unauthorized
+    WWW-Authenticate: BASIC realm="application"
+    Content-Length: 0
+    Server: Jetty(8.1.12.v20130726)
+
+#### Using ldapsearch to verify LDAP connectivity and credentials
+
+If your authentication to Knox fails and you believe you're using correct credentials, you could try to verify the connectivity and credentials using `ldapsearch`, assuming you are using an LDAP directory for authentication.
+
+Assuming you are using the default values that came out of the box with Knox, your `ldapsearch` command would look like the following:
+
+    ldapsearch -h localhost -p 33389 -D "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" -w guest-password -b "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" "objectclass=*"
+
+This should produce output like the following
+
+    # extended LDIF
+    
+    LDAPv3
+    base <uid=guest,ou=people,dc=hadoop,dc=apache,dc=org> with scope subtree
+    filter: objectclass=*
+    requesting: ALL
+    
+    
+    # guest, people, hadoop.apache.org
+    dn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
+    objectClass: organizationalPerson
+    objectClass: person
+    objectClass: inetOrgPerson
+    objectClass: top
+    uid: guest
+    cn: Guest
+    sn: User
+    userpassword:: Z3Vlc3QtcGFzc3dvcmQ=
+    
+    # search result
+    search: 2
+    result: 0 Success
+    
+    # numResponses: 2
+    # numEntries: 1
+
+In a more general form the ldapsearch command would be
+
+    ldapsearch -h {HOST} -p {PORT} -D {DN of binding user} -w {bind password} -b {DN of binding user} "objectclass=*"
+
+### Hostname Resolution Issues ###
+
+The deployments/sandbox.xml topology file has the host mapping feature enabled.
+This is required due to the way networking is set up in the Sandbox VM.
+Specifically, the VM's internal hostname is sandbox.hortonworks.com.
+Since this hostname cannot be resolved to the actual VM, Knox needs to map that hostname to something resolvable.
+
+If, for example, host mapping is disabled but the Sandbox VM is still used, you will see an error in the diagnostic output similar to the one below.
+
+    13/11/18 19:11:35 WARN knox.gateway: Connection exception dispatching request: http://sandbox.hortonworks.com:50075/webhdfs/v1/user/guest/example/README?op=CREATE&namenoderpcaddress=sandbox.hortonworks.com:8020&user.name=guest&overwrite=false java.net.UnknownHostException: sandbox.hortonworks.com
+    java.net.UnknownHostException: sandbox.hortonworks.com
+      at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
+
+On the other hand, if you are migrating from the Sandbox-based configuration to a cluster you have deployed, you may see a similar error.
+However, in this case you may need to disable host mapping.
+This can be done by modifying the topology file (e.g. `deployments/sandbox.xml`) for the cluster.
+
+    ...
+    <provider>
+        <role>hostmap</role>
+        <name>static</name>
+        <enabled>false</enabled>
+        <param><name>localhost</name><value>sandbox,sandbox.hortonworks.com</value></param>
+    </provider>
+    ...
+
+
+### Job Submission Issues - HDFS Home Directories ###
+
+If you see an error like the following in your console while submitting a job using the Groovy shell, it is likely that the authenticated user does not have a home directory on HDFS.
+
+    Caught: org.apache.knox.gateway.shell.HadoopException: org.apache.knox.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
+    org.apache.knox.gateway.shell.HadoopException: org.apache.knox.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
+
+You would also see this error if you try a file operation on the home directory of the authenticating user.
+
+The error would look a little different, as shown below, if you are attempting the operation with cURL.
+
+    {"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission denied: user=tom, access=WRITE, inode=\"/user\":hdfs:hdfs:drwxr-xr-x"}}* 
+
+#### Resolution
+
+Create the home directory for the user on HDFS.
+The home directory is typically of the form `/user/{userid}` and should be owned by the user.
+The user 'hdfs' can create such a directory and make the user the owner of the directory, as sketched below.
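+
+A minimal sketch of the commands involved, assuming the authenticated user is `guest` and that you can run commands as the `hdfs` superuser on a cluster node, would be:
+
+    # create the home directory and hand ownership to the user
+    sudo -u hdfs hdfs dfs -mkdir /user/guest
+    sudo -u hdfs hdfs dfs -chown guest:guest /user/guest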
+
+
+### Job Submission Issues - OS Accounts ###
+
+If the Hadoop cluster is not secured with Kerberos, the user submitting a job need not have an OS account on the Hadoop NodeManagers.
+
+If the Hadoop cluster is secured with Kerberos, the user submitting the job should have an OS account on Hadoop NodeManagers.
+
+In either case, if the user does not have such an OS account, their file permissions are based on user ownership of files or the "other" permission in the "ugo" POSIX permission model.
+The user does not get any file permissions as a member of any group if you are using the default `hadoop.security.group.mapping`.
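+
+For reference, the group mapping implementation is selected by the `hadoop.security.group.mapping` property in `core-site.xml`. The snippet below shows the commonly used default; the exact value may differ in your distribution.
+
+    <property>
+        <name>hadoop.security.group.mapping</name>
+        <value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>
+    </property>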
+
+TODO: add sample error message from running test on secure cluster with missing OS account
+
+### HBase Issues ###
+
+If you experience problems running the HBase samples with the Sandbox VM, it may be necessary to restart HBase and the HBase REST API.
+This can sometimes occur when the Sandbox VM is restarted from a saved state.
+If the client hangs after emitting the last line in the sample output below, you are most likely affected.
+
+    System version : {...}
+    Cluster version : 0.96.0.2.0.6.0-76-hadoop2
+    Status : {...}
+    Creating table 'test_table'...
+
+HBase and the HBase REST API can be restarted using the following commands on the Hadoop Sandbox VM.
+You will need to ssh into the VM in order to run these commands.
+
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh restart rest
+
+
+### SSL Certificate Issues ###
+
+Clients that do not trust the certificate presented by the server will behave in different ways.
+A browser will typically warn you of the inability to trust the received certificate and give you an opportunity to add an exception for the particular certificate.
+cURL will present you with the following message and instructions for turning off certificate verification:
+
+    curl performs SSL certificate verification by default, using a "bundle" 
+     of Certificate Authority (CA) public keys (CA certs). If the default
+     bundle file isn't adequate, you can specify an alternate file
+     using the --cacert option.
+    If this HTTPS server uses a certificate signed by a CA represented in
+     the bundle, the certificate verification probably failed due to a
+     problem with the certificate (it might be expired, or the name might
+     not match the domain name in the URL).
+    If you'd like to turn off curl's verification of the certificate, use
+     the -k (or --insecure) option.
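+
+Rather than disabling verification entirely, you can also point cURL at a PEM file containing the gateway's certificate. A sketch, assuming the certificate has already been exported to a file named `gateway-identity.pem` (the file name is just an example), would be:
+
+    curl -i --cacert gateway-identity.pem -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'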
+
+
+### SPNego Authentication Issues ###
+
+Calls from Knox to a secure Hadoop cluster fail with SPNEGO authentication problems
+if there was a TGT for Knox in the disk cache when Knox was started.
+
+You are likely to run into this situation on developer machines where the developer has run `kinit` for some testing.
+
+Workaround: clear Knox's TGT from the disk cache (running `kdestroy` will do it) before starting Knox.
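+
+For example, on the Knox host you can check for and clear a cached TGT before starting the gateway:
+
+    klist        # show any tickets currently in the credential cache
+    kdestroy     # clear the cache
+    bin/gateway.sh start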
+
+### Filing Bugs ###
+
+Bugs can be filed using [Jira][jira].
+Please include the results of this command below in the Environment section.
+Also include the version of Hadoop being used in the same section.
+
+    cd {GATEWAY_HOME}
+    java -jar bin/gateway.jar -version
+