Posted to commits@knox.apache.org by mo...@apache.org on 2018/07/03 19:13:37 UTC

svn commit: r1835012 [3/5] - in /knox: site/books/knox-0-10-0/ site/books/knox-0-11-0/ site/books/knox-0-12-0/ site/books/knox-0-13-0/ site/books/knox-0-14-0/ site/books/knox-0-3-0/ site/books/knox-0-4-0/ site/books/knox-0-5-0/ site/books/knox-0-6-0/ s...

Modified: knox/trunk/books/1.1.0/admin_ui.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/admin_ui.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/admin_ui.md (original)
+++ knox/trunk/books/1.1.0/admin_ui.md Tue Jul  3 19:13:36 2018
@@ -28,7 +28,7 @@ Furthermore, using the Admin UI simplifi
 The URL mapping for the Knox Admin UI is:
 
 | ------- | ----------------------------------------------------------------------------------------------  |
-| Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/manager/admin-ui/`        				|   
+| Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/manager/admin-ui/` |   
 
 
 ##### Authentication

Modified: knox/trunk/books/1.1.0/book.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book.md (original)
+++ knox/trunk/books/1.1.0/book.md Tue Jul  3 19:13:36 2018
@@ -42,7 +42,7 @@
         * #[Externalized Provider Configurations]
         * #[Sharing HA Providers]
         * #[Simplified Descriptor Files]
-		* #[Cluster Configuration Monitoring]
+        * #[Cluster Configuration Monitoring]
         * #[Remote Configuration Monitor]
         * #[Remote Configuration Registry Clients]
         * #[Remote Alias Discovery]

Modified: knox/trunk/books/1.1.0/book_client-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_client-details.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_client-details.md (original)
+++ knox/trunk/books/1.1.0/book_client-details.md Tue Jul  3 19:13:36 2018
@@ -27,7 +27,7 @@ The following sections provide an overvi
 ### Client Quickstart ###
 The following installation and setup instructions should get you started with using the KnoxShell very quickly.
 
-1. Download a knoxshell-x.x.x.zip or tar file and unzip it in your preferred location {GATEWAY_CLIENT_HOME}
+1. Download a knoxshell-x.x.x.zip or tar file and unzip it in your preferred location `{GATEWAY_CLIENT_HOME}`
 
         home:knoxshell-0.12.0 larry$ ls -l
         total 296
@@ -46,17 +46,17 @@ The following installation and setup ins
     |logs         |contains the knoxshell.log file|
     |samples      |has numerous examples to help you get started|
 
-2. cd {GATEWAY_CLIENT_HOME}
+2. cd `{GATEWAY_CLIENT_HOME}`
 3. Get/set up the truststore for the target Knox instance or fronting load balancer
-    - if you have access to the server you may use the command knoxcli.sh export-cert --type JKS
-    - copy the resulting gateway-client-identity.jks to your user home directory
-4. Execute the an example script from the {GATEWAY_CLIENT_HOME}/samples directory - for instance:
-    - bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy
+    - if you have access to the server you may use the command `knoxcli.sh export-cert --type JKS`
+    - copy the resulting `gateway-client-identity.jks` to your user home directory
+4. Execute an example script from the `{GATEWAY_CLIENT_HOME}/samples` directory - for instance:
+    - `bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy`
     
-        home:knoxshell-0.12.0 larry$ bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy
-        Enter username: guest
-        Enter password:
-        [app-logs, apps, mapred, mr-history, tmp, user]
+            home:knoxshell-0.12.0 larry$ bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy
+            Enter username: guest
+            Enter password:
+            [app-logs, apps, mapred, mr-history, tmp, user]
 
 At this point, you should have seen something similar to the above output - probably with different directories listed - which should give you the idea. Take a look at the sample that we ran above:
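 The full script ships as `samples/ExampleWebHdfsLs.groovy`; a condensed sketch of its flow is shown below (the package names and credential-collector calls follow the 1.x KnoxShell API and are illustrative - check the bundled sample for the canonical version):

     import groovy.json.JsonSlurper
     import org.apache.knox.gateway.shell.Credentials
     import org.apache.knox.gateway.shell.Hadoop
     import org.apache.knox.gateway.shell.hdfs.Hdfs

     gateway = "https://localhost:8443/gateway/sandbox"

     // Prompt for credentials using the ClearInput and HiddenInput collectors.
     credentials = new Credentials()
     credentials.add("ClearInput", "Enter username: ", "user")
                .add("HiddenInput", "Enter password: ", "pass")
     credentials.collect()

     username = credentials.get("user").string()
     pass = credentials.get("pass").string()

     // Establish a session and list the HDFS root directory via WebHDFS.
     session = Hadoop.login( gateway, username, pass )
     text = Hdfs.ls( session ).dir( "/" ).now().string
     json = (new JsonSlurper()).parseText( text )
     println json.FileStatuses.FileStatus.pathSuffix
     session.shutdown()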
 
@@ -85,11 +85,11 @@ At this point, you should have seen some
 
 Some things to note about this sample:
 
-1. the gateway URL is hardcoded
-    - alternatives would be passing it as an argument to the script, using an environment variable or prompting for it with a ClearInput credential collector
-2. credential collectors are used to gather credentials or other input from various sources. In this sample the HiddenInput and ClearInput collectors prompt the user for the input with the provided prompt text and the values are acquired by a subsequent get call with the provided name value.
+1. The gateway URL is hardcoded
+    - Alternatives would be passing it as an argument to the script, using an environment variable or prompting for it with a ClearInput credential collector
+2. Credential collectors are used to gather credentials or other input from various sources. In this sample the HiddenInput and ClearInput collectors prompt the user for the input with the provided prompt text and the values are acquired by a subsequent get call with the provided name value.
 3. The Hadoop.login method establishes a login session of sorts which will need to be provided to the various API classes as an argument.
-4. the response text is easily retrieved as a string and can be parsed by the JsonSlurper or whatever you like
+4. The response text is easily retrieved as a string and can be parsed by the JsonSlurper or whatever you like
 
 ### Client Token Sessions ###
 Building on the Quickstart above we will drill into some of the token session details here and walk through another sample.
@@ -97,7 +97,7 @@ Building on the Quickstart above we will
 Unlike the quickstart, token sessions require the server to be configured in specific ways to allow the use of token sessions/federation.
 
 #### Server Setup ####
-1. KnoxToken service should be added to your sandbox.xml topology - see the [KnoxToken Configuration Section] (#KnoxToken+Configuration)
+1. The KnoxToken service should be added to your `sandbox.xml` topology - see the [KnoxToken Configuration Section](#KnoxToken+Configuration)
 
         <service>
            <role>KNOXTOKEN</role>
@@ -115,7 +115,7 @@ Unlike the quickstart, token sessions re
            </param>
         </service>
 
-2. tokenbased.xml topology to accept tokens as federation tokens for access to exposed resources with JWTProvider [JWT Provider](#JWT+Provider)
+2. A `tokenbased.xml` topology to accept tokens as federation tokens for access to exposed resources with the [JWT Provider](#JWT+Provider)
 
         <provider>
            <role>federation</role>
@@ -126,6 +126,7 @@ Unlike the quickstart, token sessions re
                <value>tokenbased</value>
            </param>
         </provider>
+
 3. Use the KnoxShell token commands to establish and manage your session
     - `bin/knoxshell.sh init https://localhost:8443/gateway/sandbox` to acquire a token and cache it in the user home directory
     - `bin/knoxshell.sh list` to display the details of the cached token, the expiration time and optionally the target URL
@@ -178,11 +179,11 @@ Unlike the quickstart, token sessions re
 
 Note the following about the above sample script:
 
-1. use of the KnoxToken credential collector
-2. use of the targetUrl from the credential collector
-3. optional override of the target url with environment variable
-4. the passing of the headers map to the session creation in Hadoop.login
-5. the passing of an argument for the ls command for the path to list or default to "/"
+1. Use of the KnoxToken credential collector
+2. Use of the targetUrl from the credential collector
+3. Optional override of the target URL with an environment variable
+4. The passing of the headers map to the session creation in Hadoop.login
+5. The passing of an argument to the ls command for the path to list, defaulting to "/"
 
 Also note that there is no reason to prompt for username and password as long as the token has not been destroyed or expired.
 There is also no hardcoded endpoint for using the token - it is specified in the token cache or overridden by environment variable.
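 For reference, a condensed sketch of such a token-based script follows, reflecting the notes above (the collector name, environment variable and method names are illustrative - consult the bundled samples for the exact API):

     import groovy.json.JsonSlurper
     import org.apache.knox.gateway.shell.Credentials
     import org.apache.knox.gateway.shell.Hadoop
     import org.apache.knox.gateway.shell.hdfs.Hdfs

     // Pull the token cached by 'knoxshell.sh init' via the KnoxToken credential collector.
     credentials = new Credentials()
     credentials.add("KnoxToken", "none: ", "token")
     credentials.collect()
     token = credentials.get("token")

     // Use the target URL from the token cache unless overridden by an environment variable.
     gateway = System.getenv("KNOXSHELL_TOPOLOGY_URL") ?: token.getTargetUrl()

     // Pass the token as a Bearer header when creating the session.
     headers = [ "Authorization" : "Bearer " + token.string() ]
     session = Hadoop.login( gateway, headers )

     // List the path given as the first argument, defaulting to "/".
     dir = args.length > 0 ? args[0] : "/"
     text = Hdfs.ls( session ).dir( dir ).now().string
     json = (new JsonSlurper()).parseText( text )
     println json.FileStatuses.FileStatus.pathSuffix
     session.shutdown()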
@@ -232,11 +233,11 @@ the certificate presented by the gateway
 access the gateway-identity cert. It can then be imported into cacerts on the client machine or put into a
 keystore that will be discovered in:
 
-* the user's home directory
-* in a directory specified in an environment variable: KNOX_CLIENT_TRUSTSTORE_DIR
-* in a directory specified with the above variable with the keystore filename specified in the variable: KNOX_CLIENT_TRUSTSTORE_FILENAME
-* default password "changeit" or password may be specified in environment variable: KNOX_CLIENT_TRUSTSTORE_PASS
-* or the JSSE system property: javax.net.ssl.trustStore can be used to specify its location
+* The user's home directory
+* In a directory specified in an environment variable: `KNOX_CLIENT_TRUSTSTORE_DIR`
+* In the directory specified by the above variable, with the keystore filename given in the variable `KNOX_CLIENT_TRUSTSTORE_FILENAME`
+* The default password is "changeit", or the password may be specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_PASS`
+* Or the JSSE system property `javax.net.ssl.trustStore` can be used to specify its location
 
 The DSL requires a shell to interpret the Groovy script.
 The shell can either be used interactively or to execute a script file.
@@ -273,7 +274,7 @@ Below is a very simple example of an int
 The `knox:000>` in the example above is the prompt from the embedded Groovy console.
 If your output doesn't look like this, you may need to set the verbosity and show-last-result preferences as described above in the Usage section.
 
-If you recieve an error `HTTP/1.1 403 Forbidden` it may be because that file already exists.
+If you receive an error `HTTP/1.1 403 Forbidden` it may be because that file already exists.
 Try deleting it with the following command and then try again.
 
     knox:000> Hdfs.rm(session).file("/tmp/example/README").now()
@@ -345,13 +346,13 @@ Without this an error would result the s
 ### Futures ###
 
 The DSL supports the ability to invoke commands asynchronously via the later() invocation method.
-The object returned from the later() method is a java.util.concurrent.Future parameterized with the response type of the command.
+The object returned from the `later()` method is a `java.util.concurrent.Future` parameterized with the response type of the command.
 This is an example of how to asynchronously put a file to HDFS.
 
     future = Hdfs.put(session).file("README").to("/tmp/example/README").later()
     println future.get().statusCode
 
-The future.get() method will block until the asynchronous command is complete.
+The `future.get()` method will block until the asynchronous command is complete.
 To illustrate the usefulness of this, however, multiple concurrent commands are required.
 
     readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later()
@@ -360,7 +361,7 @@ To illustrate the usefulness of this how
     println readmeFuture.get().statusCode
     println licenseFuture.get().statusCode
 
-The session.waitFor() method will wait for one or more asynchronous commands to complete.
+The `session.waitFor()` method will wait for one or more asynchronous commands to complete.
 
 
 ### Closures ###
@@ -368,13 +369,13 @@ The session.waitFor() method will wait f
 Futures alone only provide asynchronous invocation of the command.
 What if some processing should also occur asynchronously once the command is complete?
 Support for this is provided by closures.
-Closures are blocks of code that are passed into the later() invocation method.
-In Groovy these are contained within {} immediately after a method.
+Closures are blocks of code that are passed into the `later()` invocation method.
+In Groovy these are contained within `{}` immediately after a method.
 These blocks of code are executed once the asynchronous command is complete.
 
     Hdfs.put(session).file("README").to("/tmp/example/README").later(){ println it.statusCode }
 
-In this example the put() command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
+In this example the `put()` command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
 The `it` variable is automatically populated by Groovy and is a reference to the result that is returned from the future or `now()` method.
 The future example above can be rewritten to illustrate the use of closures.
 
@@ -382,7 +383,7 @@ The future example above can be rewritte
     licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later() { println it.statusCode }
     session.waitFor( readmeFuture, licenseFuture )
 
-Again, the session.waitFor() method will wait for one or more asynchronous commands to complete.
+Again, the `session.waitFor()` method will wait for one or more asynchronous commands to complete.
 
 
 ### Constructs ###
@@ -427,9 +428,9 @@ For example in `Hdfs.rm(session).ls(dir)
 
 The invocation method controls how the request is invoked.
 Currently, both synchronous and asynchronous invocation are supported.
-The now() method executes the request and returns the result immediately.
-The later() method submits the request to be executed later and returns a future from which the result can be retrieved.
-In addition later() invocation method can optionally be provided a closure to execute when the request is complete.
+The `now()` method executes the request and returns the result immediately.
+The `later()` method submits the request to be executed later and returns a future from which the result can be retrieved.
+In addition, the `later()` invocation method can optionally be provided a closure to execute when the request is complete.
 See the Futures and Closures sections below for additional detail and examples.
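 As a quick illustration, assuming an established WebHDFS-capable `session` (a sketch, not a complete script):

     // Synchronous: now() blocks and returns the response directly.
     println Hdfs.ls( session ).dir( "/tmp" ).now().string

     // Asynchronous: later() returns a Future; a closure may optionally be passed to run on completion.
     future = Hdfs.ls( session ).dir( "/tmp" ).later() { println it.string }
     session.waitFor( future )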
 
 
@@ -455,16 +456,16 @@ In the some of the examples the staticCo
 
 Groovy will invoke the getStatusCode method to retrieve the statusCode attribute.
 
-The three methods getStream(), getBytes() and getString deserve special attention.
+The three methods `getStream()`, `getBytes()` and `getString()` deserve special attention.
 Care must be taken that the HTTP body is fully read once and only once.
 Therefore one of these methods (and only one) must be called once and only once.
 Calling one of these more than once will cause an error.
 Failing to call one of these methods once will result in lingering open HTTP connections.
-The close() method may be used if the caller is not interested in reading the result body.
+The `close()` method may be used if the caller is not interested in reading the result body.
 Most commands that do not expect a response body will call close implicitly.
-If the body is retrieved via getBytes() or getString(), the close() method need not be called.
-When using getStream(), care must be taken to consume the entire body otherwise lingering open HTTP connections will result.
-The close() method may be called after reading the body partially to discard the remainder of the body.
+If the body is retrieved via `getBytes()` or `getString()`, the `close()` method need not be called.
+When using `getStream()`, care must be taken to consume the entire body; otherwise, lingering open HTTP connections will result.
+The `close()` method may be called after reading the body partially to discard the remainder of the body.
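 For example, assuming an established `session` and a file previously uploaded to `/tmp/example/README` (a sketch):

     // Read the whole body as a String; no explicit close() is needed in this case.
     text = Hdfs.get( session ).from( "/tmp/example/README" ).now().string
     println text

     // Stream the body and call close() explicitly since it is only partially consumed here.
     response = Hdfs.get( session ).from( "/tmp/example/README" ).now()
     println response.stream.read()   // reads only the first byte of the body
     response.close()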
 
 
 ### Services ###

Modified: knox/trunk/books/1.1.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_gateway-details.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_gateway-details.md (original)
+++ knox/trunk/books/1.1.0/book_gateway-details.md Tue Jul  3 19:13:36 2018
@@ -20,7 +20,7 @@
 This section describes the details of the Knox Gateway itself, including:
 
 * How URLs are mapped between a gateway that services multiple Hadoop clusters and the clusters themselves
-* How the gateway is configured through gateway-site.xml and cluster specific topology files
+* How the gateway is configured through `gateway-site.xml` and cluster specific topology files
 * How to configure the various policy enforcement provider features such as authentication, authorization, auditing, hostmapping, etc.
 
 ### URL Mapping ###
@@ -29,15 +29,15 @@ The gateway functions much like a revers
 As such, it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.
 
 #### Default Topology URLs #####
-In order to provide compatibility with the Hadoop java client and existing CLI tools, the Knox Gateway has provided a feature called the Default Topology. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients that may access webhdfs through the Hadoop file system abstraction.
+In order to provide compatibility with the Hadoop Java client and existing CLI tools, the Knox Gateway has provided a feature called the _Default Topology_. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients that may access WebHDFS through the Hadoop file system abstraction.
 
-When a topology file is deployed with a file name that matches the configured default topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the existing Hadoop CLIs for webhdfs to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.
+When a topology file is deployed with a file name that matches the configured default topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the existing Hadoop CLIs for WebHDFS to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.
 
-The configuration for the default topology name is found in gateway-site.xml as a property called: "default.app.topology.name".
+The configuration for the default topology name is found in `gateway-site.xml` as a property called: `default.app.topology.name`.
 
-The default value for this property is "sandbox".
+The default value for this property is `sandbox`.
 
-Therefore, when deploying the sandbox.xml topology, both of the following example URLs work for the same underlying Hadoop cluster:
+Therefore, when deploying the `sandbox.xml` topology, both of the following example URLs work for the same underlying Hadoop cluster:
 
     https://{gateway-host}:{gateway-port}/webhdfs
     https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
@@ -45,7 +45,7 @@ Therefore, when deploying the sandbox.xm
 These default topology URLs exist for all of the services in the topology.
 
 #### Fully Qualified URLs #####
-Examples of mappings for the WebHDFS, WebHCat, Oozie and HBase are shown below.
+Examples of mappings for WebHDFS, WebHCat, Oozie and HBase are shown below.
 These mappings are generated from the combination of the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`) and the cluster topology descriptors (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
 The port numbers shown for the Cluster URLs represent the default ports for these services.
 The actual port number may be different for a given cluster.
@@ -72,11 +72,11 @@ The value for `{cluster-name}` is derive
 
 The value for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}`, `{hbase-host}` and `{hive-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
 
-Note: The ports 50070, 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, WebHCat, Oozie, HBase and Hive respectively.
+Note: The ports 50070 (9870 for Hadoop 3.x), 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, WebHCat, Oozie, HBase and Hive respectively.
 Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.
 
 Note: The HBase REST API uses port 8080 by default. This often clashes with other running services.
-In the Hortonworks Sandbox, Apache Ambari might be running on this port so you might have to change it to a different port (e.g. 60080). 
+In the Hortonworks Sandbox, Apache Ambari might be running on this port, so you might have to change it to a different port (e.g. 60080). 
 
 <<book_topology_port_mapping.md>>
 <<config.md>>

Modified: knox/trunk/books/1.1.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_getting-started.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_getting-started.md (original)
+++ knox/trunk/books/1.1.0/book_getting-started.md Tue Jul  3 19:13:36 2018
@@ -21,15 +21,12 @@ This section provides everything you nee
 
 #### Hadoop ####
 
-An existing Hadoop 2.x cluster is required for Knox to sit in front of and protect.
+An existing Hadoop 2.x or 3.x cluster is required for Knox to sit in front of and protect.
 It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here.
 It is also possible to protect access to the services of a Hadoop cluster that is secured with Kerberos.
 This too requires additional configuration that is described in other sections of this guide.
 See #[Supported Services] for details on what is supported for this release.
 
-The Hadoop cluster should be ensured to have at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running.
-HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.
-
 The instructions that follow assume a few things:
 
 1. The gateway is *not* collocated with the Hadoop clusters themselves.
@@ -57,7 +54,7 @@ The table below provides a brief explana
 | lib/                     | Contains the JARs for all the components that make up the gateway. |
 | dep/                     | Contains the JARs for all of the components upon which the gateway depends. |
 | ext/                     | A directory where user-supplied extension JARs can be placed to extend the gateway's functionality. |
-| pids/                    | Contains the process ids for running ldap and gateway servers |
+| pids/                    | Contains the process ids for running LDAP and gateway servers |
 | samples/                 | Contains a number of samples that can be used to explore the functionality of the gateway. |
 | templates/               | Contains default configuration files that can be copied and customized. |
 | README                   | Provides basic information about the Apache Knox Gateway. |
@@ -71,18 +68,18 @@ The table below provides a brief explana
 
 This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.
 
-| Service              | Version    | Non-Secure  | Secure | HA |
-| -------------------- | ---------- | ----------- | ------ | ---|
-| WebHDFS              | 2.4.0      | ![y]        | ![y]   |![y]|
-| WebHCat/Templeton    | 0.13.0     | ![y]        | ![y]   |![y]|
-| Oozie                | 4.0.0      | ![y]        | ![y]   |![y]|
-| HBase                | 0.98.0     | ![y]        | ![y]   |![y]|
-| Hive (via WebHCat)   | 0.13.0     | ![y]        | ![y]   |![y]|
-| Hive (via JDBC/ODBC) | 0.13.0     | ![y]        | ![y]   |![y]|
-| Yarn ResourceManager | 2.5.0      | ![y]        | ![y]   |![n]|
-| Kafka (via REST Proxy) | 0.10.0   | ![y]        | ![y]   |![y]|
-| Storm                | 0.9.3      | ![y]        | ![n]   |![n]|
-| SOLR                | 5.5+ and 6+ | ![y]        | ![n]   |![y]|
+| Service                | Version     | Non-Secure  | Secure | HA |
+| -----------------------|-------------|-------------|--------|----|
+| WebHDFS                | 2.4.0       | ![y]        | ![y]   |![y]|
+| WebHCat/Templeton      | 0.13.0      | ![y]        | ![y]   |![y]|
+| Oozie                  | 4.0.0       | ![y]        | ![y]   |![y]|
+| HBase                  | 0.98.0      | ![y]        | ![y]   |![y]|
+| Hive (via WebHCat)     | 0.13.0      | ![y]        | ![y]   |![y]|
+| Hive (via JDBC/ODBC)   | 0.13.0      | ![y]        | ![y]   |![y]|
+| Yarn ResourceManager   | 2.5.0       | ![y]        | ![y]   |![n]|
+| Kafka (via REST Proxy) | 0.10.0      | ![y]        | ![y]   |![y]|
+| Storm                  | 0.9.3       | ![y]        | ![n]   |![n]|
+| SOLR                   | 5.5+ and 6+ | ![y]        | ![n]   |![y]|
 
 
 ### More Examples ###

Modified: knox/trunk/books/1.1.0/book_knox-samples.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_knox-samples.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_knox-samples.md (original)
+++ knox/trunk/books/1.1.0/book_knox-samples.md Tue Jul  3 19:13:36 2018
@@ -17,7 +17,7 @@
 
 ### Gateway Samples ###
 
-The purpose of the samples within the {GATEWAY_HOME}/samples directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.
+The purpose of the samples within the `{GATEWAY_HOME}/samples` directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.
 
 Depending on exactly how your Knox installation was done, there will be some number of steps required in order to fully install and configure the samples for use.
 
@@ -39,29 +39,29 @@ There should be little to do if anything
 
 However, the following items are worth ensuring before you start:
 
-1. The sandbox.xml topology is configured properly for the deployed services
+1. The `sandbox.xml` topology is configured properly for the deployed services
 2. That there is an LDAP server running with the guest/guest-password user available in the directory
 
-#### Steps for Ambari Deployed Knox Gateway ####
+#### Steps for Ambari deployed Knox Gateway ####
 
 Apache Knox instances that are under the management of Ambari are generally assumed not to be demo instances. These instances are in place to facilitate development, testing or production Hadoop clusters.
 
 The Knox samples can, however, be made to work with Ambari-managed Knox instances with a few steps:
 
-1. You need to have ssh access to the environment in order for the localhost assumption within the samples to be valid.
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid
 2. The Knox Demo LDAP Server is started - you can start it from Ambari
-3. The default.xml topology file can be copied to sandbox.xml in order to satisfy the topology name assumption in the samples.
+3. The `default.xml` topology file can be copied to `sandbox.xml` in order to satisfy the topology name assumption in the samples
 4. Be sure to use an actual Java JRE to run the sample with something like:
 
     /usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy
 
-#### Steps for a Manually Installed Knox Gateway ####
+#### Steps for a manually installed Knox Gateway ####
 
 For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.
 
-Essentially, these steps are identical to the Ambari deployed instance except that #3 should be replaced with the configuration of the out of the box sandbox.xml to point the configuration at the proper hosts and ports.
+Essentially, these steps are identical to the Ambari-deployed instance, except that #3 should be replaced with configuring the out-of-the-box `sandbox.xml` to point at the proper hosts and ports.
 
-1. You need to have ssh access to the environment in order for the localhost assumption within the samples to be valid.
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid.
 2. The Knox Demo LDAP Server is started - you can start it from Ambari
 3. Change the hosts and ports within the `{GATEWAY_HOME}/conf/topologies/sandbox.xml` to reflect your actual cluster service locations.
 4. Be sure to use an actual Java JRE to run the sample with something like:

Modified: knox/trunk/books/1.1.0/book_limitations.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_limitations.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_limitations.md (original)
+++ knox/trunk/books/1.1.0/book_limitations.md Tue Jul  3 19:13:36 2018
@@ -23,7 +23,7 @@
 With one exception there are no known size limits for request or response payloads that pass through the gateway.
 The exception involves POST or PUT request payload sizes for Oozie in a Kerberos secured Hadoop cluster.
 In this one case there is currently a 4Kb payload size limit for the first request made to the Hadoop cluster.
-This is a result of how the gateway negotiates a trust relationship between itself and the cluster via SPNego.
+This is a result of how the gateway negotiates a trust relationship between itself and the cluster via SPNEGO.
 There is an undocumented configuration setting to modify this limit's value if required.
 In the future this will be made more easily configurable and at that time it will be documented.
 

Modified: knox/trunk/books/1.1.0/book_service-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_service-details.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_service-details.md (original)
+++ knox/trunk/books/1.1.0/book_service-details.md Tue Jul  3 19:13:36 2018
@@ -70,12 +70,12 @@ In particular this form of the cURL comm
 
     curl -i -k -u guest:guest-password ...
 
-The option -i (aka --include) is used to output HTTP response header information.
+The option `-i` (aka `--include`) is used to output HTTP response header information.
 This will be important when the content of the HTTP Location header is required for subsequent requests.
 
-The option -k (aka --insecure) is used to avoid any issues resulting from the use of demonstration SSL certificates.
+The option `-k` (aka `--insecure`) is used to avoid any issues resulting from the use of demonstration SSL certificates.
 
-The option -u (aka --user) is used to provide the credentials to be used when the client is challenged by the gateway.
+The option `-u` (aka `--user`) is used to provide the credentials to be used when the client is challenged by the gateway.
 
 Keep in mind that the samples do not use the cookie features of cURL for the sake of simplicity.
 Therefore each request via cURL will result in an authentication.
@@ -97,7 +97,7 @@ Therefore each request via cURL will res
 
 ### Service Test API
 
-The gateway supports a Service Test API that can be used to test Knox's ability to connect to each of the different Hadoop services via a simple HTTP GET request. To be able to access this API one must add the following line into the topology for which you wish to run the service test.
+The gateway supports a Service Test API that can be used to test Knox's ability to connect to each of the different Hadoop services via a simple HTTP GET request. To be able to access this API, one must add the following line into the topology for which you wish to run the service test.
 
     <service>
       <role>SERVICE-TEST</role>

Modified: knox/trunk/books/1.1.0/book_topology_port_mapping.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_topology_port_mapping.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_topology_port_mapping.md (original)
+++ knox/trunk/books/1.1.0/book_topology_port_mapping.md Tue Jul  3 19:13:36 2018
@@ -32,5 +32,4 @@ e.g.
          <description>Enable/Disable port mapping feature.</description>
      </property>
 
-<!--If a topology mapped port is in use by another topology or process then an ERROR message is logged and gateway startup continues as normal.-->
- 
\ No newline at end of file
+If a topology mapped port is in use by another topology or process then an ERROR message is logged and gateway startup continues as normal.

Modified: knox/trunk/books/1.1.0/book_troubleshooting.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_troubleshooting.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_troubleshooting.md (original)
+++ knox/trunk/books/1.1.0/book_troubleshooting.md Tue Jul  3 19:13:36 2018
@@ -167,7 +167,7 @@ The client will likely see something alo
 
 #### Using ldapsearch to verify LDAP connectivity and credentials
 
-If your authentication to Knox fails and you believe your are using correct credentials, you could try to verify the connectivity and credentials using ldapsearch, assuming you are using LDAP directory for authentication.
+If your authentication to Knox fails and you believe you're using correct credentials, you could try to verify the connectivity and credentials using ldapsearch, assuming you are using an LDAP directory for authentication.
 
 Assuming you are using the default values that came out of the box with Knox, your ldapsearch command would look like the following:
 
@@ -254,12 +254,12 @@ user 'hdfs' can create such a directory
 
 ### Job Submission Issues - OS Accounts ###
 
-If the Hadoop cluster is not secured with Kerberos, the user submitting a job need not have an OS account on the Hadoop Nodemanagers.
+If the Hadoop cluster is not secured with Kerberos, the user submitting a job need not have an OS account on the Hadoop NodeManagers.
 
-If the Hadoop cluster is secured with Kerberos, the user submitting the job should have an OS account on Hadoop Nodemanagers.
+If the Hadoop cluster is secured with Kerberos, the user submitting the job should have an OS account on Hadoop NodeManagers.
 
 In either case, if the user does not have such an OS account, their file permissions are based on user ownership of files or the "other" permission in "ugo" POSIX permissions.
-The user does not get any file permission as a member of any group if you are using default hadoop.security.group.mapping.
+The user does not get any file permission as a member of any group if you are using the default `hadoop.security.group.mapping`.
 
 TODO: add sample error message from running test on secure cluster with missing OS account
 

Modified: knox/trunk/books/1.1.0/book_ui_service_details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/book_ui_service_details.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/book_ui_service_details.md (original)
+++ knox/trunk/books/1.1.0/book_ui_service_details.md Tue Jul  3 19:13:36 2018
@@ -160,7 +160,7 @@ are configured to work with an installed
     </service>
 
 The values for the host and port can be obtained from the following property in hbase-site.xml.
-Below the hostname of the hbase master is used since the bindAddress is 0.0.0.0
+Below, the hostname of the HBase master is used since the bindAddress is 0.0.0.0.
 
     <property>
         <name>hbase.master.info.bindAddress</name>
@@ -236,7 +236,7 @@ UI URLs is:
 
 ### Ambari UI ###
 
-Ambari UI has functionality around provisioning and managing services in a hadoop cluster. This UI can now be used 
+Ambari UI has functionality around provisioning and managing services in a Hadoop cluster. This UI can now be used 
 behind the Knox gateway.
 
 
@@ -247,7 +247,7 @@ To enable this functionality, a topology
         <url>http://<hostname>:<port></url>
     </service>
 
-The default Ambari http port is 8080. Also please note that the UI service also requires the Ambari REST API service
+The default Ambari http port is 8080. Also, please note that the UI service also requires the Ambari REST API service
  to be enabled to function properly. An example of a more complete topology is given below.
  
 
@@ -302,7 +302,7 @@ To enable this functionality, a topology
         <url>http://<hostname>:<port></url>
     </service>
 
-The default Ranger http port is 8060. Also please note that the UI service also requires the Ranger REST API service
+The default Ranger http port is 8060. Also, please note that the UI service also requires the Ranger REST API service
  to be enabled to function properly. An example of a more complete topology is given below.
  
 
@@ -361,7 +361,7 @@ To enable this functionality, a topology
         <url>http://<ATLAS_HOST>:<ATLAS_PORT></url>
     </service>
 
-The default Atlas http port is 21000. Also please note that the UI service also requires the Atlas REST API
+The default Atlas http port is 21000. Also, please note that the UI service also requires the Atlas REST API
 service to be enabled to function properly. An example of a more complete topology is given below.
 
 Atlas Rest API URL Mapping
@@ -374,7 +374,7 @@ For Atlas Rest URLs, the mapping of Knox
 
 
 
-Access Atlas Api using Curl call
+Access the Atlas API using a cURL call
 
      curl -i -k -L -u admin:admin -X GET \
                'https://knox-gateway:8443/gateway/{topology}/atlas/api/atlas/v2/types/typedefs?type=classification&_=1495442879421'
@@ -415,20 +415,19 @@ The URL mapping for the Atlas UI is:
                         <url>http://<ATLAS_HOST>:<ATLAS_PORT></url>
                     </service>
                 </topology>
-                                                                                                                                        Atlas
 
-Note : - This feature will allow for 'anonymous' authentication. Essentially bypassing any LDAP or other authentication done by Knox and allow the proxied service to do the actual authentication.
+Note: This feature will allow for 'anonymous' authentication, essentially bypassing any LDAP or other authentication done by Knox and allowing the proxied service to do the actual authentication.
 
 
 ### Zeppelin UI ###
-Apache Knox can be used to proxy Zeppelin UI and also supports websocket protocol used by Zeppelin. 
+Apache Knox can be used to proxy the Zeppelin UI and also supports the WebSocket protocol used by Zeppelin.
 
 The URL mapping for the Zeppelin UI is:
 
 | ------- | ------------------------------------------------------------------------------------- |
 |Gateway  |  `https://{gateway-host}:{gateway-port}/{gateway-path}/{topology}/zeppelin/`
 
-By default websocket functionality is disabled, it needs to be enabed for Zeppelin UI to work properly, it can enable it by changing the `gateway.websocket.feature.enabled` property to 'true' in `<KNOX-HOME>/conf/gateway-site.xml` file, for e.g.
+By default, WebSocket functionality is disabled. It needs to be enabled for the Zeppelin UI to work properly, which can be done by changing the `gateway.websocket.feature.enabled` property to 'true' in the `<KNOX-HOME>/conf/gateway-site.xml` file, e.g.
 
     <property>
         <name>gateway.websocket.feature.enabled</name>

Modified: knox/trunk/books/1.1.0/config.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config.md (original)
+++ knox/trunk/books/1.1.0/config.md Tue Jul  3 19:13:36 2018
@@ -111,49 +111,49 @@ Ensure that the values match the ones be
 
 The following table illustrates the configurable elements of the Apache Knox Gateway at the server level via `gateway-site.xml`.
 
-property    | description | default
+Property    | Description | Default
 ------------|-----------|-----------
-gateway.deployment.dir|The directory within GATEWAY_HOME that contains gateway topology deployments.|{GATEWAY_HOME}/data/deployments
-gateway.security.dir|The directory within GATEWAY_HOME that contains the required security artifacts|{GATEWAY_HOME}/data/security
-gateway.data.dir|The directory within GATEWAY_HOME that contains the gateway instance data|{GATEWAY_HOME}/data
-gateway.services.dir|The directory within GATEWAY_HOME that contains the gateway services definitions.|{GATEWAY_HOME}/services
-gateway.hadoop.conf.dir|The directory within GATEWAY_HOME that contains the gateway configuration|{GATEWAY_HOME}/conf
-gateway.frontend.url|The URL that should be used during rewriting so that it can rewrite the URLs with the correct "frontend" URL|none
-gateway.xforwarded.enabled|Indicates whether support for some X-Forwarded-* headers is enabled|true
-gateway.trust.all.certs|Indicates whether all presented client certs should establish trust|false
-gateway.client.auth.needed|Indicates whether clients are required to establish a trust relationship with client certificates|false  
-gateway.truststore.path|Location of the truststore for client certificates to be trusted|gateway.jks 
-gateway.truststore.type|Indicates the type of truststore|JKS
-gateway.keystore.type|Indicates the type of keystore for the identity store|JKS
-gateway.jdk.tls.ephemeralDHKeySize|jdk.tls.ephemeralDHKeySize, is defined to customize the ephemeral DH key sizes. The minimum acceptable DH key size is 1024 bits, except for exportable cipher suites or legacy mode (jdk.tls.ephemeralDHKeySize=legacy)|2048
-gateway.threadpool.max|The maximum concurrent requests the server will process.  The default is 254.  Connections beyond this will be queued.|254
-gateway.httpclient.maxConnections|The maximum number of connections that a single httpclient will maintain to a single host:port.  The default is 32.|32
-gateway.httpclient.connectionTimeout|The amount of time to wait when attempting a connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is 20 sec. | 20 sec.
-gateway.httpclient.socketTimeout|The amount of time to wait for data on a socket before aborting the connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is 20 sec. | 20 sec.
-gateway.httpserver.requestBuffer|The size of the HTTP server request buffer.  The default is 16K.|16384
-gateway.httpserver.requestHeaderBuffer|The size of the HTTP server request header buffer.  The default is 8K.|8192
-gateway.httpserver.responseBuffer|The size of the HTTP server response buffer.  The default is 32K.|32768
-gateway.httpserver.responseHeaderBuffer|The size of the HTTP server response header buffer.  The default is 8K.|8192
-gateway.websocket.feature.enabled|Enable/Disable websocket feature.|false
-gateway.gzip.compress.mime.types|Content types to be gzip compressed by Knox on the way out to browser.|text/html, text/plain, text/xml, text/css, application/javascript, text/javascript, application/x-javascript
-gateway.signing.keystore.name|OPTIONAL Filename of keystore file that contains the signing keypair. NOTE: An alias needs to be created using "knoxcli.sh create-alias" for the alias name signing.key.passphrase in order to provide the passphrase to access the keystore.|null
-gateway.signing.key.alias|OPTIONAL alias for the signing keypair within the keystore specified via gateway.signing.keystore.name.|null
-ssl.enabled|Indicates whether SSL is enabled for the Gateway|true
-ssl.include.ciphers|A comma separated list of ciphers to accept for SSL. See the [JSSE Provider docs](http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider) for possible ciphers. These can also contain regular expressions as shown in the [Jetty documentation](http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html).|all|
-ssl.exclude.ciphers|A comma separated list of ciphers to reject for SSL. See the [JSSE Provider docs](http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider) for possible ciphers. These can also contain regular expressions as shown in the [Jetty documentation](http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html).|none|
-ssl.exclude.protocols|Excludes a comma separated list of protocols to not accept for SSL or "none"|SSLv3
-gateway.remote.config.monitor.client|A reference to the [remote configuration registry client](#Remote+Configuration+Registry+Clients) the remote configuration monitor will employ.|null
-gateway.remote.config.monitor.client.allowUnauthenticatedReadAccess | When a remote registry client is configured to access a registry securely, this property can be set to allow unauthenticated clients to continue to read the content from that registry by setting the ACLs accordingly. | false
-gateway.remote.config.registry.<b>&lt;name&gt;</b>|A named [remote configuration registry client](#Remote+Configuration+Registry+Clients) definition|null
-gateway.cluster.config.monitor.ambari.enabled | Indicates whether the cluster monitoring and associated dynamic topology updating is enabled. | false
-gateway.cluster.config.monitor.ambari.interval | The interval (in seconds) at which the cluster monitor will poll Ambari for cluster configuration changes. | 60
-gateway.remote.alias.service.enabled | Turn on/off Remote Alias Discovery, this will take effect only when remote configuration monitor is enabled  | true
-gateway.read.only.override.topologies | A comma-delimited list of topology names which should be forcibly treated as read-only. | none
-gateway.discovery.default.address | The default discovery address, which is applied if no address is specified in a descriptor. | null
-gateway.discovery.default.cluster | The default discovery cluster name, which is applied if no cluster name is specified in a descriptor. | null
-gateway.dispatch.whitelist | A semicolon-delimited list of regular expressions for controlling to which endpoints Knox dispatches and redirects will be permitted. If DEFAULT is specified, or the property is omitted entirely, then a default domain-based whitelist will be derived from the Knox host. An empty value means no dispatches will be permitted. | null
-gateway.dispatch.whitelist.services | A comma-delimited list of service roles to which the *gateway.dispatch.whitelist* will be applied. | none
-gateway.strict.topology.validation | If true topology xml files will be validated against the topology schema during redeploy | false
+`gateway.deployment.dir`|The directory within `GATEWAY_HOME` that contains gateway topology deployments|`{GATEWAY_HOME}/data/deployments`
+`gateway.security.dir`|The directory within `GATEWAY_HOME` that contains the required security artifacts|`{GATEWAY_HOME}/data/security`
+`gateway.data.dir`|The directory within `GATEWAY_HOME` that contains the gateway instance data|`{GATEWAY_HOME}/data`
+`gateway.services.dir`|The directory within `GATEWAY_HOME` that contains the gateway services definitions|`{GATEWAY_HOME}/services`
+`gateway.hadoop.conf.dir`|The directory within `GATEWAY_HOME` that contains the gateway configuration|`{GATEWAY_HOME}/conf`
+`gateway.frontend.url`|The URL that should be used during rewriting so that it can rewrite the URLs with the correct "frontend" URL|none
+`gateway.xforwarded.enabled`|Indicates whether support for some X-Forwarded-* headers is enabled|`true`
+`gateway.trust.all.certs`|Indicates whether all presented client certs should establish trust|`false`
+`gateway.client.auth.needed`|Indicates whether clients are required to establish a trust relationship with client certificates|`false`  
+`gateway.truststore.path`|Location of the truststore for client certificates to be trusted|`gateway.jks` 
+`gateway.truststore.type`|Indicates the type of truststore|`JKS`
+`gateway.keystore.type`|Indicates the type of keystore for the identity store|`JKS`
+`gateway.jdk.tls.ephemeralDHKeySize`|`jdk.tls.ephemeralDHKeySize` is defined to customize the ephemeral DH key sizes. The minimum acceptable DH key size is 1024 bits, except for exportable cipher suites or legacy mode (`jdk.tls.ephemeralDHKeySize=legacy`)|`2048`
+`gateway.threadpool.max`|The maximum concurrent requests the server will process. Connections beyond this will be queued.|`254`
+`gateway.httpclient.maxConnections`|The maximum number of connections that a single HttpClient will maintain to a single host:port.|`32`
+`gateway.httpclient.connectionTimeout`|The amount of time to wait when attempting a connection. The natural unit is milliseconds, but a 's' or 'm' suffix may be used for seconds or minutes respectively.|20s
+`gateway.httpclient.socketTimeout`|The amount of time to wait for data on a socket before aborting the connection. The natural unit is milliseconds, but a 's' or 'm' suffix may be used for seconds or minutes respectively.| 20s
+`gateway.httpserver.requestBuffer`|The size of the HTTP server request buffer in bytes|`16384`
+`gateway.httpserver.requestHeaderBuffer`|The size of the HTTP server request header buffer in bytes|`8192`
+`gateway.httpserver.responseBuffer`|The size of the HTTP server response buffer in bytes|`32768`
+`gateway.httpserver.responseHeaderBuffer`|The size of the HTTP server response header buffer in bytes|`8192`
+`gateway.websocket.feature.enabled`|Enable/Disable WebSocket feature|`false`
+`gateway.gzip.compress.mime.types`|Content types to be gzip compressed by Knox on the way out to browser.|text/html, text/plain, text/xml, text/css, application/javascript, text/javascript, application/x-javascript
+`gateway.signing.keystore.name`|OPTIONAL Filename of keystore file that contains the signing keypair. NOTE: An alias needs to be created using `knoxcli.sh create-alias` for the alias name `signing.key.passphrase` in order to provide the passphrase to access the keystore.|null
+`gateway.signing.key.alias`|OPTIONAL alias for the signing keypair within the keystore specified via `gateway.signing.keystore.name`|null
+`ssl.enabled`|Indicates whether SSL is enabled for the Gateway|`true`
+`ssl.include.ciphers`|A comma separated list of ciphers to accept for SSL. See the [JSSE Provider docs](http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider) for possible ciphers. These can also contain regular expressions as shown in the [Jetty documentation](http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html).|all
+`ssl.exclude.ciphers`|A comma separated list of ciphers to reject for SSL. See the [JSSE Provider docs](http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider) for possible ciphers. These can also contain regular expressions as shown in the [Jetty documentation](http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html).|none
+`ssl.exclude.protocols`|Excludes a comma separated list of protocols to not accept for SSL or "none"|`SSLv3`
+`gateway.remote.config.monitor.client`|A reference to the [remote configuration registry client](#Remote+Configuration+Registry+Clients) the remote configuration monitor will employ|null
+`gateway.remote.config.monitor.client.allowUnauthenticatedReadAccess` | When a remote registry client is configured to access a registry securely, this property can be set to allow unauthenticated clients to continue to read the content from that registry by setting the ACLs accordingly. | `false`
+`gateway.remote.config.registry.<name>`|A named [remote configuration registry client](#Remote+Configuration+Registry+Clients) definition|null
+`gateway.cluster.config.monitor.ambari.enabled`| Indicates whether the cluster monitoring and associated dynamic topology updating is enabled | `false`
+`gateway.cluster.config.monitor.ambari.interval` | The interval (in seconds) at which the cluster monitor will poll Ambari for cluster configuration changes | `60`
+`gateway.remote.alias.service.enabled` | Turn on/off Remote Alias Discovery; this will take effect only when the remote configuration monitor is enabled | `true`
+`gateway.read.only.override.topologies` | A comma-delimited list of topology names which should be forcibly treated as read-only. | none
+`gateway.discovery.default.address` | The default discovery address, which is applied if no address is specified in a descriptor. | null
+`gateway.discovery.default.cluster` | The default discovery cluster name, which is applied if no cluster name is specified in a descriptor. | null
+`gateway.dispatch.whitelist` | A semicolon-delimited list of regular expressions for controlling to which endpoints Knox dispatches and redirects will be permitted. If DEFAULT is specified, or the property is omitted entirely, then a default domain-based whitelist will be derived from the Knox host. An empty value means no dispatches will be permitted. | null
+`gateway.dispatch.whitelist.services` | A comma-delimited list of service roles to which the *gateway.dispatch.whitelist* will be applied. | none
+`gateway.strict.topology.validation` | If true, topology XML files will be validated against the topology schema during redeploy | `false`
 
 #### Topology Descriptors ####
 
@@ -289,7 +289,7 @@ However the EC2 VM is unaware of this ex
     ip-10-118-99-172.ec2.internal
     ip-10-39-107-209.ec2.internal
 
-The Hostmap configuration required to allow access external to the Hadoop cluster via the Apache Knox Gateway would be this.
+The Hostmap configuration required to allow access external to the Hadoop cluster via the Apache Knox Gateway would be this:
 
     <topology>
         <gateway>
@@ -470,16 +470,16 @@ topology descriptor.
 
 *Descriptor Properties*
 
-property&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| description
+Property | Description
 ------------|-----------
-discovery-type|The discovery source type. (Currently, the only supported type is *AMBARI*).
-discovery-address|The endpoint address for the discovery source. If omitted, then Knox will check for the gateway-site configuration property named *gateway.discovery.default.address*, and use its value if defined.
-discovery-user|The username with permission to access the discovery source. If omitted, then Knox will check for an alias named *ambari.discovery.user*, and use its value if defined.
-discovery-pwd-alias|The alias of the password for the user with permission to access the discovery source. If omitted, then Knox will check for an alias named *ambari.discovery.password*, and use its value if defined.
-provider-config-ref|A reference to a provider configuration in `{GATEWAY_HOME}/conf/shared-providers/`.
-cluster|The name of the cluster from which the topology service endpoints should be determined.  If omitted, then Knox will check for the gateway-site configuration property named *gateway.discovery.default.cluster*, and use its value if defined.
-services|The collection of services to be included in the topology.
-applications|The collection of applications to be included in the topology.
+`discovery-type`|The discovery source type. (Currently, the only supported type is `AMBARI`).
+`discovery-address`|The endpoint address for the discovery source.
+`discovery-user`|The username with permission to access the discovery source. If omitted, then Knox will check for an alias named `ambari.discovery.user`, and use its value if defined.
+`discovery-pwd-alias`|The alias of the password for the user with permission to access the discovery source. If omitted, then Knox will check for an alias named `ambari.discovery.password`, and use its value if defined.
+`provider-config-ref`|A reference to a provider configuration in `{GATEWAY_HOME}/conf/shared-providers/`.
+`cluster`|The name of the cluster from which the topology service endpoints should be determined. If omitted, then Knox will check for the gateway-site configuration property named `gateway.discovery.default.cluster`, and use its value if defined.
+`services`|The collection of services to be included in the topology.
+`applications`|The collection of applications to be included in the topology.
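+
+As a sketch of how these properties fit together, a simple JSON descriptor might look like the following (the discovery address, credentials, cluster name, and service list are placeholders for your environment):
+
+    {
+      "discovery-type"      : "AMBARI",
+      "discovery-address"   : "http://ambari.example.com:8080",
+      "discovery-user"      : "ambariuser",
+      "discovery-pwd-alias" : "ambari.discovery.password",
+      "provider-config-ref" : "sandbox-providers",
+      "cluster"             : "Sandbox",
+      "services" : [
+        { "name" : "NAMENODE" },
+        { "name" : "WEBHDFS" },
+        { "name" : "RESOURCEMANAGER" }
+      ]
+    }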
 
 
 Two file formats are supported for two distinct purposes.
@@ -663,9 +663,9 @@ _The actual name of the client (e.g., sa
 With this configuration, the gateway will monitor the following znodes in the specified ZooKeeper instance:
 
     /knox
-	   /config
-	      /shared-providers
-		  /descriptors
+        /config
+            /shared-providers
+            /descriptors
 
 The creation of these znodes, and the population of their respective contents, is an activity __not__ currently managed by the gateway. However, the [KNOX CLI](#Knox+CLI) includes commands for managing the contents
 of these znodes.
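+
+For example, assuming a remote registry client named `sandbox-zookeeper-client` has been defined (see #[Remote Configuration Registry Clients]), the entries could be seeded with commands along these lines (the file names are illustrative):
+
+    bin/knoxcli.sh upload-provider-config /tmp/sandbox-providers.xml --registry-client sandbox-zookeeper-client
+    bin/knoxcli.sh upload-descriptor /tmp/sandbox.json --registry-client sandbox-zookeeper-client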
@@ -681,28 +681,28 @@ Modifications to the contents of these z
 
 Znode | Action | Result
 ------|--------|--------
-/knox/config/shared-providers | add | Download the new file to the local shared-providers directory
-/knox/config/shared-providers | modify | Download the new file to the local shared-providers directory; If there are any existing descriptor references, then topology will be regenerated and redeployed for those referencing descriptors.
-/knox/config/shared-providers | delete | Delete the corresponding file from the local shared-providers directory
-/knox/config/descriptors | add | Download the new file to the local descriptors directory; A corresponding topology will be generated and deployed.
-/knox/config/descriptors | modify | Download the new file to the local descriptors directory; The corresponding topology will be regenerated and redeployed.
-/knox/config/descriptors | delete | Delete the corresponding file from the local descriptors directory
+`/knox/config/shared-providers` | add | Download the new file to the local shared-providers directory
+`/knox/config/shared-providers` | modify | Download the new file to the local shared-providers directory; If there are any existing descriptor references, then topology will be regenerated and redeployed for those referencing descriptors.
+`/knox/config/shared-providers` | delete | Delete the corresponding file from the local shared-providers directory
+`/knox/config/descriptors` | add | Download the new file to the local descriptors directory; A corresponding topology will be generated and deployed.
+`/knox/config/descriptors` | modify | Download the new file to the local descriptors directory; The corresponding topology will be regenerated and redeployed.
+`/knox/config/descriptors` | delete | Delete the corresponding file from the local descriptors directory
 
 This simplifies the configuration for HA gateway deployments, in that the gateway instances can all be configured to monitor the same ZooKeeper instance, and changes to the znodes' contents will be applied to all those gateway instances. With this approach, it is no longer necessary to manually deploy topologies to each of the gateway instances.
 
 _A Note About ACLs_
 
     While the gateway does not currently require secure interactions with remote registries, it is recommended
-	that ACLs be applied to restrict at least writing of the entries referenced by this monitor. If write
-	access is available to everyone, then the contents of the configuration cannot be known to be trustworthy,
-	and there is the potential for malicious activity. Be sure to carefully consider who will have the ability
-	to define configuration in monitored remote registries, and apply the necessary measures to ensure its
-	trustworthiness.
+    that ACLs be applied to restrict at least writing of the entries referenced by this monitor. If write
+    access is available to everyone, then the contents of the configuration cannot be known to be trustworthy,
+    and there is the potential for malicious activity. Be sure to carefully consider who will have the ability
+    to define configuration in monitored remote registries and apply the necessary measures to ensure its
+    trustworthiness.
 
 
 #### Remote Configuration Registry Clients ####
 
-One or more features of the gateway employ remote configuration registry (e.g., ZooKeeper) clients. These clients are configured by setting properties in the gateway configuration (gateway-site.xml).
+One or more features of the gateway employ remote configuration registry (e.g., ZooKeeper) clients. These clients are configured by setting properties in the gateway configuration (`gateway-site.xml`).
 
 Each client configuration is a single property, the name of which is prefixed with __gateway.remote.config.registry.__ and suffixed by the client identifier.
 The value of such a property is a registry-type-specific set of semicolon-delimited properties for that client, including the type of registry with which it will interact.
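+
+For instance, a ZooKeeper client definition in `gateway-site.xml` might look like this (the client name and address are examples):
+
+    <property>
+        <name>gateway.remote.config.registry.sandbox-zookeeper-client</name>
+        <value>type=ZooKeeper;address=localhost:2181</value>
+    </property>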
@@ -784,7 +784,7 @@ We do make some provisions in order to p
 
 It is encrypted with 128-bit AES encryption and, where possible, the file permissions are set so that the file is accessible only by the user the gateway is running as.
 
-After persisting the secret, ensure that the file at data/security/master has the appropriate permissions set for your environment.
+After persisting the secret, ensure that the file at `data/security/master` has the appropriate permissions set for your environment.
 This is probably the most important layer of defense for the master secret.
 Do not assume that the encryption is sufficient protection.
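+
+For example, on a typical installation the master secret could be created with the Knox CLI and the persisted file then restricted to the gateway user (the user name and file mode are illustrative):
+
+    cd {GATEWAY_HOME}
+    bin/knoxcli.sh create-master
+    chown knox data/security/master
+    chmod 600 data/security/master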
 
@@ -797,7 +797,7 @@ See the Knox CLI section for description
 There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data.
 These artifacts can be managed from outside of the gateway instances or generated and populated by the gateway instance itself.
 
-The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway instances and instances as part of a cluster of gateways in mind.
+The following is a description of how this is coordinated with both standalone (development, demo, etc.) gateway instances and instances as part of a cluster of gateways in mind.
 
 Upon start of the gateway server we:
 
@@ -817,7 +817,7 @@ Upon deployment of a Hadoop cluster topo
 
 1. Look for a credential store for the topology. For instance, we have a sample topology that gets deployed out of the box.  We look for `data/security/keystores/sandbox-credentials.jceks`. This topology specific credential store is used for storing secrets/passwords that are used for encrypting sensitive data with topology specific keys.
     * If no credential store is found for the topology being deployed then one is created for it.
-      Population of the aliases is delegated to the configured providers within the system that will require the use of a  secret for a particular task.
+      Population of the aliases is delegated to the configured providers within the system that will require the use of a secret for a particular task.
       They may programmatically set the value of the secret or choose to have the value for the specified alias generated through the AliasService.
     * If a credential store is found then we ensure that it can be loaded with the provided master secret and the configured providers have the opportunity to ensure that the aliases are populated and, if not, to populate them.
 
@@ -835,7 +835,7 @@ In order to provide your own certificate
 ##### Importing a key pair into a Java keystore #####
 One way to accomplish this is to start with a PKCS12 store for your key pair and then convert it to a Java keystore or JKS.
 
-The following example uses openssl to create a PKCS12 encoded store from your provided certificate and private key that are in PEM format.
+The following example uses OpenSSL to create a PKCS12 encoded store from your provided certificate and private key that are in PEM format.
 
     openssl pkcs12 -export -in cert.pem -inkey key.pem > server.p12
 
@@ -849,7 +849,7 @@ While using this approach a couple of im
 
         keytool -changealias -alias "1" -destalias "gateway-identity" -keystore gateway.jks -storepass {knoxpw}
     
-2. the name of the expected identity keystore for the gateway MUST be gateway.jks
+2. the name of the expected identity keystore for the gateway MUST be `gateway.jks`
 3. the passwords for the keystore and the imported key may both be set to the master secret for the gateway install. You can change the key passphrase after import using keytool as well. You may need to do this in order to provision the password in the credential store as described later in this section. For example:
 
         keytool -keypasswd -alias gateway-identity -keystore gateway.jks
@@ -922,11 +922,11 @@ General steps:
         curl --cacert supwin12ad.cer -u hdptester:hadoop -X GET 'https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS'
 
 ##### Credential Store #####
-Whenever you provide your own keystore with either a self-signed cert or an issued certificate signed by a trusted authority, you will need to set an alias for the gateway-identity-passphrase or create an empty credential store. This is necessary for the current release in order for the system to determine the correct password for the keystore and the key.
+Whenever you provide your own keystore with either a self-signed cert or an issued certificate signed by a trusted authority, you will need to set an alias for the `gateway-identity-passphrase` or create an empty credential store. This is necessary for the current release in order for the system to determine the correct password for the keystore and the key.
 
 The credential stores in Knox use the JCEKS keystore type as it allows for the storage of general secrets in addition to certificates.
 
-Keytool may be used to create credential stores but the Knox CLI section details how to create aliases. These aliases are managed within credential stores which are created by the CLI as needed. The simplest approach is to create the gateway-identity-passpharse alias with the Knox CLI. This will create the credential store if it doesn't already exist and add the key passphrase.
+Keytool may be used to create credential stores but the Knox CLI section details how to create aliases. These aliases are managed within credential stores which are created by the CLI as needed. The simplest approach is to create the `gateway-identity-passphrase` alias with the Knox CLI. This will create the credential store if it doesn't already exist and add the key passphrase.
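+
+For example, the alias could be provisioned as follows (substituting your actual key passphrase):
+
+    bin/knoxcli.sh create-alias gateway-identity-passphrase --value {passphrase}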
 
 See the Knox CLI section for descriptions of the command line utilities related to the management of the credential stores.
 
@@ -939,7 +939,7 @@ Once you have created these keystores yo
 2. All security related artifacts are protected with the master secret
 3. Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway instances in the cluster of gateways
 4. Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same for the same topology across the cluster of gateway instances.
-   However, they are specific to the topology - so secrets for one hadoop cluster are different from those of another.
+   However, they are specific to the topology - so secrets for one Hadoop cluster are different from those of another.
    This allows for fail-over from one gateway instance to another, even when encryption is being used, while not allowing the compromise of one encryption key to expose the data for all clusters.
 
 NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs can potentially be shared across all gateway instances in a cluster.

Modified: knox/trunk/books/1.1.0/config_advanced_ldap.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_advanced_ldap.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_advanced_ldap.md (original)
+++ knox/trunk/books/1.1.0/config_advanced_ldap.md Tue Jul  3 19:13:36 2018
@@ -4,12 +4,12 @@ The default configuration computes the b
 This does not work in enterprises where users could belong to multiple branches of the LDAP tree.
 You could instead enable advanced configuration that would compute the bind DN of an incoming user with an LDAP search.
 
-#### Problem with  userDnTemplate based Authentication 
+#### Problem with userDnTemplate based Authentication 
 
-UserDnTemplate based authentication uses configuration parameter ldapRealm.userDnTemplate.
-Typical value of userDNTemplate would look like `uid={0},ou=people,dc=hadoop,dc=apache,dc=org`.
+UserDnTemplate based authentication uses the configuration parameter `ldapRealm.userDnTemplate`.
+A typical `userDnTemplate` value would look like `uid={0},ou=people,dc=hadoop,dc=apache,dc=org`.
  
-To compute bind DN of the client, we swap the place holder {0} with login id provided by the client.
+To compute bind DN of the client, we swap the place holder `{0}` with the login id provided by the client.
 For example, if the login id provided by the client is "guest",  
 the computed bind DN would be `uid=guest,ou=people,dc=hadoop,dc=apache,dc=org`.
  
@@ -18,12 +18,12 @@ This keeps configuration simple.
 However, this does not work if users belong to different branches of LDAP DIT.
 For example, if there are some users under `ou=people,dc=hadoop,dc=apache,dc=org` 
 and some users under `ou=contractors,dc=hadoop,dc=apache,dc=org`,  
-we can not come up with userDnTemplate that would work for all the users.
+we cannot come up with userDnTemplate that would work for all the users.
 
 #### Using advanced LDAP Authentication
 
-With advanced LDAP authentication, we find the bind DN of the user by searching LDAP directory
-instead of interpolating bind DN from userDNTemplate. 
+With advanced LDAP authentication, we find the bind DN of the user by searching the LDAP directory
+instead of interpolating the bind DN from the `userDnTemplate`. 
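+
+As a minimal sketch, the search-related realm parameters (shown here in `shiro.ini` form, with values taken from the demo LDAP and therefore purely illustrative) might look like:
+
+    ldapRealm.searchBase=dc=hadoop,dc=apache,dc=org
+    ldapRealm.userSearchAttributeName=uid
+    ldapRealm.userObjectClass=person
+    ldapRealm.contextFactory.systemUsername=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
+    ldapRealm.contextFactory.systemPassword=${ALIAS=ldcSystemPassword}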
 
 
 #### Example search filter to find the client bind DN
@@ -34,15 +34,15 @@ Assuming
 * ldapRealm.userObjectClass=person
 * client specified login id = "guest"
  
-LDAP Filter for doing a search to find the bind DN would be
+The LDAP Filter for doing a search to find the bind DN would be
 
     (&(uid=guest)(objectclass=person))
 
-This could find bind DN to be 
+This could find the bind DN to be 
 
     uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
 
-Please note that the userSearchAttributeName need not be part of bindDN.
+Please note that the `userSearchAttributeName` need not be part of bindDN.
 
 For example, you could use 
 
@@ -50,11 +50,11 @@ For example, you could use
 * ldapRealm.userObjectClass=person
 * client specified login id =  "bill.clinton@gmail.com"
 
-LDAP Filter for doing a search to find the bind DN would be
+The LDAP Filter for doing a search to find the bind DN would be
 
     (&(email=bill.clinton@gmail.com)(objectclass=person))
 
-This could find bind DN to be
+This could find the bind DN to be
 
     uid=billc,ou=contractors,dc=hadoop,dc=apache,dc=org
 

Modified: knox/trunk/books/1.1.0/config_audit.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_audit.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_audit.md (original)
+++ knox/trunk/books/1.1.0/config_audit.md Tue Jul  3 19:13:36 2018
@@ -45,20 +45,20 @@ For detailed information read [Apache lo
 
 Component | Description
 ---------|-----------
-EVENT_PUBLISHING_TIME|Time when audit record was published.
-ROOT_REQUEST_ID|The root request ID if this is a sub-request. Currently it is empty.
-PARENT_REQUEST_ID|The parent request ID if this is a sub-request. Currently it is empty.
-REQUEST_ID|A unique value representing the current, active request. If the current request id value is different from the current parent request id value then the current request id value is moved to the parent request id before it is replaced by the provided request id. If the root request id is not set it will be set with the first non-null value of either the parent request id or the passed request id.
-LOGGER_NAME|The name of the logger
-TARGET_SERVICE_NAME|Name of Hadoop service. Can be empty if audit record is not linked to any Hadoop service, for example, audit record for topology deployment.
-USER_NAME|Name of user that initiated session with Knox
-PROXY_USER_NAME|Mapped user name. For detailed information read #[Identity Assertion].
-SYSTEM_USER_NAME|Currently is empty.
-ACTION|Type of action that was executed. Following actions are defined: authentication, authorization, redeploy, deploy, undeploy, identity-mapping, dispatch, access.
-RESOURCE_TYPE|Type of resource for which action was executed. Following resource types are defined: uri, topology, principal.
-RESOURCE_NAME|Name of resource. For resource of type topology it is name of topology. For resource of type uri it is inbound or dispatch request path. For resource of type principal it is a name of mapped user.
-OUTCOME|Action result type. Following outcomes are defined: success, failure, unavailable.
-LOGGING_MESSAGE| Logging message. Contains additional tracking information.
+EVENT_PUBLISHING_TIME | Time when the audit record was published.
+ROOT_REQUEST_ID       | The root request ID if this is a sub-request. Currently it is empty.
+PARENT_REQUEST_ID     | The parent request ID if this is a sub-request. Currently it is empty.
+REQUEST_ID            | A unique value representing the current, active request. If the current request id value is different from the current parent request id value, then the current request id value is moved to the parent request id before it is replaced by the provided request id. If the root request id is not set, it will be set with the first non-null value of either the parent request id or the passed request id.
+LOGGER_NAME           | The name of the logger.
+TARGET_SERVICE_NAME   | Name of the Hadoop service. Can be empty if the audit record is not linked to any Hadoop service, for example, an audit record for topology deployment.
+USER_NAME             | Name of the user that initiated the session with Knox.
+PROXY_USER_NAME       | Mapped user name. For detailed information read #[Identity Assertion].
+SYSTEM_USER_NAME      | Currently empty.
+ACTION                | Type of action that was executed. The following actions are defined: authentication, authorization, redeploy, deploy, undeploy, identity-mapping, dispatch, access.
+RESOURCE_TYPE         | Type of resource for which the action was executed. The following resource types are defined: uri, topology, principal.
+RESOURCE_NAME         | Name of the resource. For a resource of type topology it is the name of the topology. For a resource of type uri it is the inbound or dispatch request path. For a resource of type principal it is the name of the mapped user.
+OUTCOME               | Action result type. The following outcomes are defined: success, failure, unavailable.
+LOGGING_MESSAGE       | Logging message. Contains additional tracking information.
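+
+For illustration, a hypothetical audit record with the components in the order listed above (empty root/parent request IDs and system user name; the exact timestamp format depends on the configured log4j layout) might look like:
+
+    18/07/03 19:13:36 ||8a6b8f1e-4c2d-4a0e-9b3f-0d1e2f3a4b5c|audit|WEBHDFS|guest|guest||access|uri|/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS|success|Response status: 200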
 
 #### Audit log rotation ####
 

Modified: knox/trunk/books/1.1.0/config_authn.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_authn.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_authn.md (original)
+++ knox/trunk/books/1.1.0/config_authn.md Tue Jul  3 19:13:36 2018
@@ -28,7 +28,7 @@ The current release of Knox ships with a
 
 This section will cover the general approach to leveraging Shiro within the bundled provider, including:
 
-1. General mapping of provider config to shiro.ini config
+1. General mapping of provider config to `shiro.ini` config
 2. Specific configuration for the bundled BASIC/LDAP configuration
 3. Some tips on what may need to be customized for your environment
 4. How to set up the use of LDAP over SSL or LDAPS
@@ -49,9 +49,9 @@ The following example shows the format o
         </param>
     </provider>
 
-Conversely, the Shiro provider currently expects a shiro.ini file in the web-inf directory of the cluster specific web application.
+Conversely, the Shiro provider currently expects a `shiro.ini` file in the `WEB-INF` directory of the cluster specific web application.
 
-The following example illustrates a configuration of the bundled BASIC/LDAP authentication config in a shiro.ini file:
+The following example illustrates a configuration of the bundled BASIC/LDAP authentication config in a `shiro.ini` file:
 
     [urls]
     /**=authcBasic
@@ -61,7 +61,7 @@ The following example illustrates a conf
     ldapRealm.contextFactory.url=ldap://localhost:33389
     ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org
 
-In order to fit into the context of an INI file format, at deployment time we interrogate the parameters provided in the provider configuration and parse the INI section out of the parameter names. The following provider config illustrates this approach. Notice that the section names in the above shiro.ini match the beginning of the param names that are in the following config:
+In order to fit into the context of an INI file format, at deployment time we interrogate the parameters provided in the provider configuration and parse the INI section out of the parameter names. The following provider config illustrates this approach. Notice that the section names in the above `shiro.ini` match the beginning of the parameter names that are in the following config:
 
     <gateway>
         <provider>
@@ -98,9 +98,9 @@ This section discusses the LDAP configur
 
 **main.ldapRealm** - this element indicates the fully qualified class name of the Shiro realm to be used in authenticating the user. The class name provided by default in the sample is `org.apache.shiro.realm.ldap.JndiLdapRealm`. This implementation provides us with the ability to authenticate but by default has authorization disabled. In order to provide authorization - which is seen by Shiro as dependent on an LDAP schema that is specific to each organization - an extension of JndiLdapRealm is generally used to override and implement the `doGetAuthorizationInfo` method. In this particular release we are providing a simple authorization provider that can be used along with the Shiro authentication provider.
 
-**main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the common name (CN) portion of the DN. **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox and is represented by the users.ldif file in the `{GATEWAY_HOME}/conf` directory.
+**main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the common name (CN) portion of the DN. **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox and is represented by the `users.ldif` file in the `{GATEWAY_HOME}/conf` directory.
 
-**main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of LDAP server. It also includes the scheme of the protocol to use. This may be either ldap or ldaps depending on whether you are communicating with the LDAP over SSL (highly recommended). **This element will need to be customized to reflect your deployment environment.**.
+**main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of the LDAP server. It also includes the scheme of the protocol to use. This may be either `ldap` or `ldaps` depending on whether you are communicating with the LDAP over SSL (highly recommended). **This element will need to be customized to reflect your deployment environment.**
 
 **main.ldapRealm.contextFactory.authenticationMechanism** - this element indicates the type of authentication that should be performed against the LDAP server. The current default value is `simple` which indicates a simple bind operation. This element should not need to be modified and no mechanism other than a simple bind has been tested for this particular release.
 
@@ -112,15 +112,15 @@ You would use LDAP configuration as docu
 
 Some Active Directory specific things to keep in mind:
 
-Typical AD main.ldapRealm.userDnTemplate value looks slightly different, such as
+A typical AD `main.ldapRealm.userDnTemplate` value looks slightly different, such as
 
     cn={0},cn=users,DC=lab,DC=sample,dc=com
 
-Please compare this with a typical Apache DS main.ldapRealm.userDnTemplate value and make note of the difference:
+Please compare this with a typical Apache DS `main.ldapRealm.userDnTemplate` value and make note of the difference:
 
     uid={0},ou=people,dc=hadoop,dc=apache,dc=org
 
-If your AD is configured to authenticate based on just the cn and password and does not require user DN, you do not have to specify value for  main.ldapRealm.userDnTemplate.
+If your AD is configured to authenticate based on just the cn and password and does not require the user DN, you do not have to specify a value for `main.ldapRealm.userDnTemplate`.
 
 
 #### LDAP over SSL (LDAPS) Configuration ####
@@ -128,7 +128,7 @@ In order to communicate with your LDAP s
 
 1. **main.ldapRealm.contextFactory.url** must be changed to have the `ldaps` protocol scheme and the port must be the SSL listener port on your LDAP server.
 2. Identity certificate (keypair) provisioned to LDAP server - your LDAP server specific documentation should indicate what is required for providing a cert or keypair to represent the LDAP server identity to connecting clients.
-3. Trusting the LDAP Server's public key - if the LDAP Server's identity certificate is issued by a well known and trusted certificate authority and is already represented in the JRE's cacerts truststore then you don't need to do anything for trusting the LDAP server's cert. If, however, the cert is selfsigned or issued by an untrusted authority you will need to either add it to the cacerts keystore or to another truststore that you may direct Knox to utilize through a system property.
+3. Trusting the LDAP Server's public key - if the LDAP Server's identity certificate is issued by a well known and trusted certificate authority and is already represented in the JRE's cacerts truststore then you don't need to do anything for trusting the LDAP server's cert. If, however, the cert is self-signed or issued by an untrusted authority you will need to either add it to the cacerts keystore or to another truststore that you may direct Knox to utilize through a system property (see the sketch below).
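+
+A minimal sketch of this custom truststore approach, assuming the LDAP server certificate has been exported to a PEM file (the paths, alias, and password placeholder are illustrative), might be:
+
+    keytool -import -alias ldaps -file ldaps-server.pem -keystore /etc/knox/ldaps-truststore.jks -storepass {truststore-password}
+
+The gateway JVM could then be directed at this truststore with the standard `javax.net.ssl.trustStore` and `javax.net.ssl.trustStorePassword` system properties.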
 
 #### Session Configuration ####
 

Modified: knox/trunk/books/1.1.0/config_authz.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_authz.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_authz.md (original)
+++ knox/trunk/books/1.1.0/config_authz.md Tue Jul  3 19:13:36 2018
@@ -23,7 +23,7 @@ The Knox Gateway has an out-of-the-box a
 
 This provider utilizes a simple and familiar pattern of using ACLs to protect Hadoop resources by specifying users, groups and IP addresses that are permitted access.
 
-Note : This feature will not work as expected if 'anonymous' authentication is used. 
+Note: This feature will not work as expected if 'anonymous' authentication is used. 
 
 #### Configuration ####
 
@@ -44,7 +44,7 @@ The above configuration enables the auth
     
 where `{serviceName}` would need to be the name of a configured Hadoop service within the topology.
 
-NOTE: ipaddr is unique among the parts of the ACL in that you are able to specify a wildcard within an ipaddr to indicate that the remote address must being with the String prior to the asterisk within the ipaddr acl. For instance:
+NOTE: ipaddr is unique among the parts of the ACL in that you are able to specify a wildcard within an ipaddr to indicate that the remote address must begin with the string prior to the asterisk within the ipaddr ACL. For instance:
 
     <param>
         <name>{serviceName}.acl</name>
@@ -61,7 +61,7 @@ Note also that configuration without any
     </param>
 
 meaning: all users, groups and IPs have access.
-Each of the elements of the acl param support multiple values via comma separated list and the `*` wildcard to match any.
+Each of the elements of the ACL parameter supports multiple values via a comma-separated list and the `*` wildcard to match any.
 
 For instance:
 
@@ -212,7 +212,7 @@ Note: In the examples below `{serviceNam
 
 The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.
 
-This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. It is fully documented in the Identity Assertion section of this guide.
+This feature allows us to map the authenticated principal to a runAs or impersonated principal to be asserted to the Hadoop services in the backend. It is fully documented in the Identity Assertion section of this guide.
 
 These additional mapping capabilities are used together with the authorization ACL policy.
 An example of a full topology that illustrates these together is below.

Modified: knox/trunk/books/1.1.0/config_ha.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_ha.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_ha.md (original)
+++ knox/trunk/books/1.1.0/config_ha.md Tue Jul  3 19:13:36 2018
@@ -103,7 +103,7 @@ See this document for setting up Apache
 
 See this document for an example: http://www.akadia.com/services/ssh_test_certificate.html
 
-By convention, Apache HTTP Server and Knox certificates are put into /etc/apache2/ssl/ folder.
+By convention, Apache HTTP Server and Knox certificates are put into the `/etc/apache2/ssl/` folder.
 
 ###### Update Apache HTTP Server configuration file ######