Posted to commits@knox.apache.org by km...@apache.org on 2015/10/26 16:54:15 UTC

svn commit: r1710635 [6/11] - in /knox: site/ site/books/knox-0-3-0/ site/books/knox-0-4-0/ site/books/knox-0-5-0/ site/books/knox-0-6-0/ site/books/knox-0-7-0/ site/images/ trunk/markbook/src/main/java/org/apache/hadoop/gateway/markbook/ trunk/markboo...

Modified: knox/site/books/knox-0-5-0/knox-0-5-0.html
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/knox-0-5-0.html?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
--- knox/site/books/knox-0-5-0/knox-0-5-0.html (original)
+++ knox/site/books/knox-0-5-0/knox-0-5-0.html Mon Oct 26 15:54:14 2015
@@ -13,7 +13,7 @@
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
---><p><link href="book.css" rel="stylesheet"/></p><p><img src="knox-logo.gif" alt="Knox"/> <!-- <img src="apache-logo.gif" alt="Apache"/> --> <img src="apache-logo.gif" align="right" alt="Apache"/></p><h1><a id="Apache+Knox+Gateway+0.5.x+User's+Guide"></a>Apache Knox Gateway 0.5.x User&rsquo;s Guide</h1><h2><a id="Table+Of+Contents"></a>Table Of Contents</h2>
+--><p><link href="book.css" rel="stylesheet"/></p><p><img src="knox-logo.gif" alt="Knox"/> <!-- <img src="apache-logo.gif" alt="Apache"/> --> <img src="apache-logo.gif" align="right" alt="Apache"/></p><h1><a id="Apache+Knox+Gateway+0.5.x+User's+Guide">Apache Knox Gateway 0.5.x User&rsquo;s Guide</a> <a href="#Apache+Knox+Gateway+0.5.x+User's+Guide"><img src="markbook-section-link.png"/></a></h1><h2><a id="Table+Of+Contents">Table Of Contents</a> <a href="#Table+Of+Contents"><img src="markbook-section-link.png"/></a></h2>
 <ul>
   <li><a href="#Introduction">Introduction</a></li>
   <li><a href="#Quick+Start">Quick Start</a></li>
@@ -53,7 +53,7 @@
   <li><a href="#Limitations">Limitations</a></li>
   <li><a href="#Troubleshooting">Troubleshooting</a></li>
   <li><a href="#Export+Controls">Export Controls</a></li>
-</ul><h2><a id="Introduction"></a>Introduction</h2><p>The Apache Knox Gateway is a system that provides a single point of authentication and access for Apache Hadoop services in a cluster. The goal is to simplify Hadoop security for both users (i.e. who access the cluster data and execute jobs) and operators (i.e. who control access and manage the cluster). The gateway runs as a server (or cluster of servers) that provide centralized access to one or more Hadoop clusters. In general the goals of the gateway are as follows:</p>
+</ul><h2><a id="Introduction">Introduction</a> <a href="#Introduction"><img src="markbook-section-link.png"/></a></h2><p>The Apache Knox Gateway is a system that provides a single point of authentication and access for Apache Hadoop services in a cluster. The goal is to simplify Hadoop security for both users (i.e. who access the cluster data and execute jobs) and operators (i.e. who control access and manage the cluster). The gateway runs as a server (or cluster of servers) that provide centralized access to one or more Hadoop clusters. In general the goals of the gateway are as follows:</p>
 <ul>
  <li>Provide perimeter security for Hadoop REST APIs to make Hadoop security easier to set up and use
   <ul>
@@ -66,7 +66,7 @@
     <li>Limit the network endpoints (and therefore firewall holes) required to access a Hadoop cluster</li>
     <li>Hide the internal Hadoop cluster topology from potential attackers</li>
   </ul></li>
-</ul><h2><a id="Quick+Start"></a>Quick Start</h2><p>Here are the steps to have Apache Knox up and running against a Hadoop Cluster:</p>
+</ul><h2><a id="Quick+Start">Quick Start</a> <a href="#Quick+Start"><img src="markbook-section-link.png"/></a></h2><p>Here are the steps to have Apache Knox up and running against a Hadoop Cluster:</p>
 <ol>
   <li>Verify system requirements</li>
   <li>Download a virtual machine (VM) with Hadoop</li>
@@ -76,13 +76,13 @@
   <li>Start the LDAP embedded within Knox</li>
   <li>Start the Knox Gateway</li>
   <li>Do Hadoop with Knox</li>
-</ol><h3><a id="1+-+Requirements"></a>1 - Requirements</h3><h4><a id="Java"></a>Java</h4><p>Java 1.6 or later is required for the Knox Gateway runtime. Use the command below to check the version of Java installed on the system where Knox will be running.</p>
+</ol><h3><a id="1+-+Requirements">1 - Requirements</a> <a href="#1+-+Requirements"><img src="markbook-section-link.png"/></a></h3><h4><a id="Java">Java</a> <a href="#Java"><img src="markbook-section-link.png"/></a></h4><p>Java 1.6 or later is required for the Knox Gateway runtime. Use the command below to check the version of Java installed on the system where Knox will be running.</p>
 <pre><code>java -version
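 # the reported version should be 1.6 or later, per the requirement above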
-</code></pre><h4><a id="Hadoop"></a>Hadoop</h4><p>Knox 0.5.0 supports Hadoop 2.x, the quick start instructions assume a Hadoop 2.x virtual machine based environment. </p><h3><a id="2+-+Download+Hadoop+2.x+VM"></a>2 - Download Hadoop 2.x VM</h3><p>The quick start provides a link to download Hadoop 2.0 based Hortonworks virtual machine <a href="http://hortonworks.com/products/hdp-2/#install">Sandbox</a>. Please note Knox supports other Hadoop distributions and is configurable against a full blown Hadoop cluster. Configuring Knox for Hadoop 2.x version, or Hadoop deployed in EC2 or a custom Hadoop cluster is documented in advance deployment guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway"></a>3 - Download Apache Knox Gateway</h3><p>Download one of the distributions below from the <a href="http://www.apache.org/dyn/closer.cgi/knox">Apache mirrors</a>.</p>
+</code></pre><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img src="markbook-section-link.png"/></a></h4><p>Knox 0.5.0 supports Hadoop 2.x, the quick start instructions assume a Hadoop 2.x virtual machine based environment. </p><h3><a id="2+-+Download+Hadoop+2.x+VM">2 - Download Hadoop 2.x VM</a> <a href="#2+-+Download+Hadoop+2.x+VM"><img src="markbook-section-link.png"/></a></h3><p>The quick start provides a link to download Hadoop 2.0 based Hortonworks virtual machine <a href="http://hortonworks.com/products/hdp-2/#install">Sandbox</a>. Please note Knox supports other Hadoop distributions and is configurable against a full blown Hadoop cluster. Configuring Knox for Hadoop 2.x version, or Hadoop deployed in EC2 or a custom Hadoop cluster is documented in advance deployment guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway">3 - Download Apache Knox Gateway</a> <a href="#3+-+Download+Apache+Knox+Gateway"><img src="markbook-section-link.png"/></a></h3><p>Download one of the dis
 tributions below from the <a href="http://www.apache.org/dyn/closer.cgi/knox">Apache mirrors</a>.</p>
 <ul>
   <li>Source archive: <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0-src.zip">knox-0.5.0-src.zip</a> (<a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0-src.zip.asc">PGP signature</a>, <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0-src.zip.sha">SHA1 digest</a>, <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0-src.zip.md5">MD5 digest</a>)</li>
   <li>Binary archive: <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0.zip">knox-0.5.0.zip</a> (<a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0.zip.asc">PGP signature</a>, <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0.zip.sha">SHA1 digest</a>, <a href="http://www.apache.org/dyn/closer.cgi/knox/0.5.0/knox-0.5.0.zip.md5">MD5 digest</a>)</li>
-</ul><p>Apache Knox Gateway releases are available under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>. See the NOTICE file contained in each release artifact for applicable copyright attribution notices.</p><h3><a id="Verify"></a>Verify</h3><p>While recommended, verification is an optional step. You can verify the integrity of any downloaded files using the PGP signatures. Please read <a href="http://httpd.apache.org/dev/verification.html">Verifying Apache HTTP Server Releases</a> for more information on why you should verify our releases.</p><p>The PGP signatures can be verified using PGP or GPG. First download the <a href="https://dist.apache.org/repos/dist/release/knox/KEYS">KEYS</a> file as well as the .asc signature files for the relevant release packages. Make sure you get these files from the main distribution directory linked above, rather than from a mirror. Then verify the signatures using one of the methods below.</p>
+</ul><p>Apache Knox Gateway releases are available under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>. See the NOTICE file contained in each release artifact for applicable copyright attribution notices.</p><h3><a id="Verify">Verify</a> <a href="#Verify"><img src="markbook-section-link.png"/></a></h3><p>While recommended, verification is an optional step. You can verify the integrity of any downloaded files using the PGP signatures. Please read <a href="http://httpd.apache.org/dev/verification.html">Verifying Apache HTTP Server Releases</a> for more information on why you should verify our releases.</p><p>The PGP signatures can be verified using PGP or GPG. First download the <a href="https://dist.apache.org/repos/dist/release/knox/KEYS">KEYS</a> file as well as the .asc signature files for the relevant release packages. Make sure you get these files from the main distribution directory linked above, rather than from a mirror. Then verify the signatures using one of the methods below.</p>
 <pre><code>% pgpk -a KEYS
 % pgpv knox-0.5.0.zip.asc
 </code></pre><p>or</p>
@@ -91,12 +91,12 @@
 </code></pre><p>or</p>
 <pre><code>% gpg --import KEYS
 % gpg --verify knox-0.5.0.zip.asc
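 # a successful verification is reported with a line such as: gpg: Good signature from ...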
-</code></pre><h3><a id="4+-+Start+Hadoop+virtual+machine"></a>4 - Start Hadoop virtual machine</h3><p>Start the Hadoop virtual machine.</p><h3><a id="5+-+Install+Knox"></a>5 - Install Knox</h3><p>The steps required to install the gateway will vary depending upon which distribution format (zip | rpm) was downloaded. In either case you will end up with a directory where the gateway is installed. This directory will be referred to as your <code>{GATEWAY_HOME}</code> throughout this document.</p><h4><a id="ZIP"></a>ZIP</h4><p>If you downloaded the Zip distribution you can simply extract the contents into a directory. The example below provides a command that can be executed to do this. Note the <code>{VERSION}</code> portion of the command must be replaced with an actual Apache Knox Gateway version number. This might be 0.4.0 for example and must patch the value in the file downloaded.</p>
+</code></pre><h3><a id="4+-+Start+Hadoop+virtual+machine">4 - Start Hadoop virtual machine</a> <a href="#4+-+Start+Hadoop+virtual+machine"><img src="markbook-section-link.png"/></a></h3><p>Start the Hadoop virtual machine.</p><h3><a id="5+-+Install+Knox">5 - Install Knox</a> <a href="#5+-+Install+Knox"><img src="markbook-section-link.png"/></a></h3><p>The steps required to install the gateway will vary depending upon which distribution format (zip | rpm) was downloaded. In either case you will end up with a directory where the gateway is installed. This directory will be referred to as your <code>{GATEWAY_HOME}</code> throughout this document.</p><h4><a id="ZIP">ZIP</a> <a href="#ZIP"><img src="markbook-section-link.png"/></a></h4><p>If you downloaded the Zip distribution you can simply extract the contents into a directory. The example below provides a command that can be executed to do this. Note the <code>{VERSION}</code> portion of the command must be replaced with an actual Apa
 che Knox Gateway version number. This might be 0.4.0 for example and must patch the value in the file downloaded.</p>
 <pre><code>jar xf knox-{VERSION}.zip
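 # the archive can also be extracted with any zip utility, e.g. unzip knox-{VERSION}.zip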
-</code></pre><p>This will create a directory <code>knox-{VERSION}</code> in your current directory. The directory <code>knox-{VERSION}</code> will be considered your <code>{GATEWAY_HOME}</code>.</p><h3><a id="6+-+Start+LDAP+embedded+in+Knox"></a>6 - Start LDAP embedded in Knox</h3><p>Knox comes with an LDAP server for demonstration purposes.</p>
+</code></pre><p>This will create a directory <code>knox-{VERSION}</code> in your current directory. The directory <code>knox-{VERSION}</code> will be considered your <code>{GATEWAY_HOME}</code>.</p><h3><a id="6+-+Start+LDAP+embedded+in+Knox">6 - Start LDAP embedded in Knox</a> <a href="#6+-+Start+LDAP+embedded+in+Knox"><img src="markbook-section-link.png"/></a></h3><p>Knox comes with an LDAP server for demonstration purposes.</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/ldap.sh start
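 # the demo LDAP server listens on the ApacheDS default port 33389
 # (see the sample assumptions below); it can later be stopped with: bin/ldap.sh stop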
-</code></pre><h3><a id="7+-+Start+Knox"></a>7 - Start Knox</h3><p>The gateway can be started using the provided shell script.</p><h6><a id="Starting+via+script"></a>Starting via script</h6><p>Run the knoxcli create-master command in order to persist the master secret that is used to protect the key and credential stores for the gateway instance.</p><h6><a id="linux"></a>linux</h6>
+</code></pre><h3><a id="7+-+Start+Knox">7 - Start Knox</a> <a href="#7+-+Start+Knox"><img src="markbook-section-link.png"/></a></h3><p>The gateway can be started using the provided shell script.</p><h6><a id="Starting+via+script">Starting via script</a> <a href="#Starting+via+script"><img src="markbook-section-link.png"/></a></h6><p>Run the knoxcli create-master command in order to persist the master secret that is used to protect the key and credential stores for the gateway instance.</p><h6><a id="linux">linux</a> <a href="#linux"><img src="markbook-section-link.png"/></a></h6>
 <pre><code>cd {GATEWAY_HOME}
 bin/knoxcli.sh create-master
 </code></pre><p>The cli will prompt you for the master secret (i.e. password).</p><p>The server will discover the persisted master secret during start up and complete the setup process for demo installs. A demo install will consist of a Knox gateway instance with an identity certificate for localhost. This will require clients to be on the same machine or to turn off hostname verification. For more involved deployments, see the Knox CLI section of this document for additional commands - including the ability to create a self-signed certificate for a specific hostname.</p>
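 <p>For example, once the master secret has been persisted, the gateway itself can be started with the provided script; a minimal sketch (the stop and clean forms of the same script are shown below):</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/gateway.sh start
 </code></pre>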
@@ -108,7 +108,7 @@ bin/gateway.sh stop
 </code></pre><p>If for some reason the gateway is stopped other than by using the command above you may need to clear the tracking PID.</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/gateway.sh clean
-</code></pre><p><strong>NOTE: This command will also clear any .out and .err files from the /var/log/knox directory, so use this with caution.</strong></p><h3><a id="8+-+Do+Hadoop+with+Knox"></a>8 - Do Hadoop with Knox</h3><h4><a id="Put+a+file+in+HDFS+via+Knox."></a>Put a file in HDFS via Knox.</h4><h4><a id="CAT+a+file+in+HDFS+via+Knox."></a>CAT a file in HDFS via Knox.</h4><h4><a id="Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway."></a>Invoke the LISTSTATUS operation on WebHDFS via the gateway.</h4><p>This will return a directory listing of the root (i.e. /) directory of HDFS.</p>
+</code></pre><p><strong>NOTE: This command will also clear any .out and .err files from the /var/log/knox directory, so use this with caution.</strong></p><h3><a id="8+-+Do+Hadoop+with+Knox">8 - Do Hadoop with Knox</a> <a href="#8+-+Do+Hadoop+with+Knox"><img src="markbook-section-link.png"/></a></h3><h4><a id="Put+a+file+in+HDFS+via+Knox.">Put a file in HDFS via Knox.</a> <a href="#Put+a+file+in+HDFS+via+Knox."><img src="markbook-section-link.png"/></a></h4><h4><a id="CAT+a+file+in+HDFS+via+Knox.">CAT a file in HDFS via Knox.</a> <a href="#CAT+a+file+in+HDFS+via+Knox."><img src="markbook-section-link.png"/></a></h4><h4><a id="Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway.">Invoke the LISTSTATUS operation on WebHDFS via the gateway.</a> <a href="#Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway."><img src="markbook-section-link.png"/></a></h4><p>This will return a directory listing of the root (i.e. /) directory of HDFS.</p>
 <pre><code>curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
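 # -k accepts the self-signed demo certificate without verification;
 # -u supplies HTTP Basic credentials that the gateway checks against the demo LDAP server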
 </code></pre><p>The results of the above command should be something along the lines of the output below. The exact information returned is subject to the content within HDFS in your Hadoop cluster. Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS. It does not necessarily prove that any of the other services are correctly configured to be accessible. To validate that, see the sections for the individual services in <a href="#Service+Details">Service Details</a>.</p>
@@ -123,11 +123,11 @@ Server: Jetty(6.1.26)
 {&quot;accessTime&quot;:0,&quot;blockSize&quot;:0,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:0,&quot;modificationTime&quot;:1350596040075,&quot;owner&quot;:&quot;hdfs&quot;,&quot;pathSuffix&quot;:&quot;tmp&quot;,&quot;permission&quot;:&quot;777&quot;,&quot;replication&quot;:0,&quot;type&quot;:&quot;DIRECTORY&quot;},
 {&quot;accessTime&quot;:0,&quot;blockSize&quot;:0,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:0,&quot;modificationTime&quot;:1350595857178,&quot;owner&quot;:&quot;hdfs&quot;,&quot;pathSuffix&quot;:&quot;user&quot;,&quot;permission&quot;:&quot;755&quot;,&quot;replication&quot;:0,&quot;type&quot;:&quot;DIRECTORY&quot;}
 ]}}
-</code></pre><h4><a id="Submit+a+MR+job+via+Knox."></a>Submit a MR job via Knox.</h4><h4><a id="Get+status+of+a+MR+job+via+Knox."></a>Get status of a MR job via Knox.</h4><h4><a id="Cancel+a+MR+job+via+Knox."></a>Cancel a MR job via Knox.</h4><h3><a id="More+Examples"></a>More Examples</h3><h2><a id="Apache+Knox+Details"></a>Apache Knox Details</h2><p>This section provides everything you need to know to get the Knox gateway up and running against a Hadoop cluster.</p><h4><a id="Hadoop"></a>Hadoop</h4><p>An existing Hadoop 2.x cluster is required for Knox 0.5.0 to sit in front of and protect. It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here. It is also possible to protect access to a services of a Hadoop cluster that is secured with kerberos. This too requires additional configuration that is described in other sections of this guide. See <a href="#Supported+Services">Supported Services</a> for details on what is s
 upported for this release.</p><p>The Hadoop cluster should be ensured to have at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running. HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.</p><p>The instructions that follow assume a few things:</p>
+</code></pre><h4><a id="Submit+a+MR+job+via+Knox.">Submit a MR job via Knox.</a> <a href="#Submit+a+MR+job+via+Knox."><img src="markbook-section-link.png"/></a></h4><h4><a id="Get+status+of+a+MR+job+via+Knox.">Get status of a MR job via Knox.</a> <a href="#Get+status+of+a+MR+job+via+Knox."><img src="markbook-section-link.png"/></a></h4><h4><a id="Cancel+a+MR+job+via+Knox.">Cancel a MR job via Knox.</a> <a href="#Cancel+a+MR+job+via+Knox."><img src="markbook-section-link.png"/></a></h4><h3><a id="More+Examples">More Examples</a> <a href="#More+Examples"><img src="markbook-section-link.png"/></a></h3><h2><a id="Apache+Knox+Details">Apache Knox Details</a> <a href="#Apache+Knox+Details"><img src="markbook-section-link.png"/></a></h2><p>This section provides everything you need to know to get the Knox gateway up and running against a Hadoop cluster.</p><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img src="markbook-section-link.png"/></a></h4><p>An existing Hadoop 2.x cluster is requ
 ired for Knox 0.5.0 to sit in front of and protect. It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here. It is also possible to protect access to a services of a Hadoop cluster that is secured with kerberos. This too requires additional configuration that is described in other sections of this guide. See <a href="#Supported+Services">Supported Services</a> for details on what is supported for this release.</p><p>The Hadoop cluster should be ensured to have at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running. HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.</p><p>The instructions that follow assume a few things:</p>
 <ol>
   <li>The gateway is <em>not</em> collocated with the Hadoop clusters themselves.</li>
  <li>The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.</li>
-</ol><p>All of the instructions and samples provided here are tailored and tested to work &ldquo;out of the box&rdquo; against a <a href="http://hortonworks.com/products/hortonworks-sandbox">Hortonworks Sandbox 2.x VM</a>.</p><h4><a id="Apache+Knox+Directory+Layout"></a>Apache Knox Directory Layout</h4><p>Knox can be installed by expanding the zip/archive file.</p><p>The table below provides a brief explanation of the important files and directories within <code>{GATEWAY_HOME}</code>.</p>
+</ol><p>All of the instructions and samples provided here are tailored and tested to work &ldquo;out of the box&rdquo; against a <a href="http://hortonworks.com/products/hortonworks-sandbox">Hortonworks Sandbox 2.x VM</a>.</p><h4><a id="Apache+Knox+Directory+Layout">Apache Knox Directory Layout</a> <a href="#Apache+Knox+Directory+Layout"><img src="markbook-section-link.png"/></a></h4><p>Knox can be installed by expanding the zip/archive file.</p><p>The table below provides a brief explanation of the important files and directories within <code>{GATEWAY_HOME}</code>.</p>
 <table>
   <thead>
     <tr>
@@ -209,7 +209,7 @@ Server: Jetty(6.1.26)
       <td>Documents required attribution notices for included dependencies. </td>
     </tr>
   </tbody>
-</table><h3><a id="Supported+Services"></a>Supported Services</h3><p>This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.</p>
+</table><h3><a id="Supported+Services">Supported Services</a> <a href="#Supported+Services"><img src="markbook-section-link.png"/></a></h3><p>This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.</p>
 <table>
   <thead>
     <tr>
@@ -269,7 +269,7 @@ Server: Jetty(6.1.26)
       <td><img src="check.png"  alt="y"/> </td>
     </tr>
   </tbody>
-</table><h3><a id="More+Examples"></a>More Examples</h3><p>These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.</p>
+</table><h3><a id="More+Examples">More Examples</a> <a href="#More+Examples"><img src="markbook-section-link.png"/></a></h3><p>These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.</p>
 <ul>
   <li><a href="#WebHDFS+Examples">WebHDFS Examples</a></li>
   <li><a href="#WebHCat+Examples">WebHCat Examples</a></li>
@@ -277,38 +277,38 @@ Server: Jetty(6.1.26)
   <li><a href="#HBase+Examples">HBase Examples</a></li>
   <li><a href="#Hive+Examples">Hive Examples</a></li>
   <li><a href="#Yarn+Examples">Yarn Examples</a></li>
-</ul><h3><a id="Gateway+Samples"></a>Gateway Samples</h3><p>The purpose of the samples within the {GATEWAY_HOME}/samples directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.</p><p>Depending on exactly how your Knox installation was done, there will be some number of steps required in order fully install and configure the samples for use.</p><p>This section will help describe the assumptions of the samples and the steps to get them to work in a couple of different deployment scenarios.</p><h4><a id="Assumptions+of+the+Samples"></a>Assumptions of the Samples</h4><p>The samples were initially written with the intent of working out of the box for the various Hadoop demo environments that are deployed as a single node cluster inside of a VM. The following assumptions were made from that context and should be understood in order to get the samples to work in other 
 deployment scenarios:</p>
+</ul><h3><a id="Gateway+Samples">Gateway Samples</a> <a href="#Gateway+Samples"><img src="markbook-section-link.png"/></a></h3><p>The purpose of the samples within the {GATEWAY_HOME}/samples directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.</p><p>Depending on exactly how your Knox installation was done, there will be some number of steps required in order fully install and configure the samples for use.</p><p>This section will help describe the assumptions of the samples and the steps to get them to work in a couple of different deployment scenarios.</p><h4><a id="Assumptions+of+the+Samples">Assumptions of the Samples</a> <a href="#Assumptions+of+the+Samples"><img src="markbook-section-link.png"/></a></h4><p>The samples were initially written with the intent of working out of the box for the various Hadoop demo environments that are deployed as a single no
 de cluster inside of a VM. The following assumptions were made from that context and should be understood in order to get the samples to work in other deployment scenarios:</p>
 <ul>
   <li>That there is a valid java JDK on the PATH for executing the samples</li>
  <li>The Knox Demo LDAP server is running on localhost and port 33389, which is the default port for the ApacheDS LDAP server.</li>
   <li>That the LDAP directory in use has a set of demo users provisioned with the convention of username and username&ldquo;-password&rdquo; as the password. Most of the samples have some variation of this pattern with &ldquo;guest&rdquo; and &ldquo;guest-password&rdquo;.</li>
  <li>That the Knox Gateway instance is running on the same machine from which you will be running the samples - therefore &ldquo;localhost&rdquo; - and that the default port of &ldquo;8443&rdquo; is being used.</li>
   <li>Finally, that there is a properly provisioned sandbox.xml topology in the {GATEWAY_HOME}/conf/topologies directory that is configured to point to the actual host and ports of running service components.</li>
-</ul><h4><a id="Steps+for+Demo+Single+Node+Clusters"></a>Steps for Demo Single Node Clusters</h4><p>There should be little to do if anything in a demo environment that has been provisioned with illustrating the use of Apache Knox.</p><p>However, the following items will be worth ensuring before you start:</p>
+</ul><h4><a id="Steps+for+Demo+Single+Node+Clusters">Steps for Demo Single Node Clusters</a> <a href="#Steps+for+Demo+Single+Node+Clusters"><img src="markbook-section-link.png"/></a></h4><p>There should be little to do if anything in a demo environment that has been provisioned with illustrating the use of Apache Knox.</p><p>However, the following items will be worth ensuring before you start:</p>
 <ol>
   <li>The sandbox.xml topology is configured properly for the deployed services</li>
   <li>That there is an LDAP server running with guest/guest-password user available in the directory</li>
-</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway"></a>Steps for Ambari Deployed Knox Gateway</h4><p>Apache Knox instances that are under the management of Ambari are generally assumed not to be demo instances. These instances are in place to facilitate development, testing or production Hadoop clusters.</p><p>The Knox samples can however be made to work with Ambari managed Knox instances with a few steps:</p>
+</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway">Steps for Ambari Deployed Knox Gateway</a> <a href="#Steps+for+Ambari+Deployed+Knox+Gateway"><img src="markbook-section-link.png"/></a></h4><p>Apache Knox instances that are under the management of Ambari are generally assumed not to be demo instances. These instances are in place to facilitate development, testing or production Hadoop clusters.</p><p>The Knox samples can however be made to work with Ambari managed Knox instances with a few steps:</p>
 <ol>
   <li>You need to have ssh access to the environment in order for the localhost assumption within the samples to be valid.</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
   <li>The default.xml topology file can be copied to sandbox.xml in order to satisfy the topology name assumption in the samples.</li>
   <li><p>Be sure to use an actual Java JRE to run the sample with something like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy</p></li>
-</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.</p><p>Essentially, these steps are identical to the Amabari deployed instance except that #3 should be replaced with the configuration of the ootb sandbox.xml to point the configuration at the proper hosts and ports.</p>
+</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway">Steps for a Manually Installed Knox Gateway</a> <a href="#Steps+for+a+Manually+Installed+Knox+Gateway"><img src="markbook-section-link.png"/></a></h4><p>For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.</p><p>Essentially, these steps are identical to the Amabari deployed instance except that #3 should be replaced with the configuration of the ootb sandbox.xml to point the configuration at the proper hosts and ports.</p>
 <ol>
   <li>You need to have ssh access to the environment in order for the localhost assumption within the samples to be valid.</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
   <li>Change the hosts and ports within the {GATEWAY_HOME}/conf/topologies/sandbox.xml to reflect your actual cluster service locations.</li>
   <li><p>Be sure to use an actual Java JRE to run the sample with something like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy</p></li>
-</ol><h2><a id="Gateway+Details"></a>Gateway Details</h2><p>This section describes the details of the Knox Gateway itself. Including: </p>
+</ol><h2><a id="Gateway+Details">Gateway Details</a> <a href="#Gateway+Details"><img src="markbook-section-link.png"/></a></h2><p>This section describes the details of the Knox Gateway itself. Including: </p>
 <ul>
   <li>How URLs are mapped between a gateway that services multiple Hadoop clusters and the clusters themselves</li>
   <li>How the gateway is configured through gateway-site.xml and cluster specific topology files</li>
  <li>How to configure the various policy enforcement provider features such as authentication, authorization, auditing, hostmapping, etc.</li>
-</ul><h3><a id="URL+Mapping"></a>URL Mapping</h3><p>The gateway functions much like a reverse proxy. As such, it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.</p><h4><a id="Default+Topology+URLs"></a>Default Topology URLs</h4><p>In order to provide compatibility with the Hadoop java client and existing CLI tools, the Knox Gateway has provided a feature called the Default Topology. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients for that may access webhdfs through the Hadoop file system abstraction.</p><p>When a topology file is deployed with a file name that matches the configured default topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the e
 xisting Hadoop CLIs for webhdfs to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.</p><p>The configuration for the default topology name is found in gateway-site.xml as a property called: &ldquo;default.app.topology.name&rdquo;.</p><p>The default value for this property is &ldquo;sandbox&rdquo;.</p><p>Therefore, when deploying the sandbox.xml topology, both of the following example URLs work for the same underlying Hadoop cluster:</p>
+</ul><h3><a id="URL+Mapping">URL Mapping</a> <a href="#URL+Mapping"><img src="markbook-section-link.png"/></a></h3><p>The gateway functions much like a reverse proxy. As such, it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.</p><h4><a id="Default+Topology+URLs">Default Topology URLs</a> <a href="#Default+Topology+URLs"><img src="markbook-section-link.png"/></a></h4><p>In order to provide compatibility with the Hadoop java client and existing CLI tools, the Knox Gateway has provided a feature called the Default Topology. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients for that may access webhdfs through the Hadoop file system abstraction.</p><p>When a topology file is deployed with a file name that matches the configured de
 fault topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the existing Hadoop CLIs for webhdfs to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.</p><p>The configuration for the default topology name is found in gateway-site.xml as a property called: &ldquo;default.app.topology.name&rdquo;.</p><p>The default value for this property is &ldquo;sandbox&rdquo;.</p><p>Therefore, when deploying the sandbox.xml topology, both of the following example URLs work for the same underlying Hadoop cluster:</p>
 <pre><code>https://{gateway-host}:{gateway-port}/webhdfs
 https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
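 # for example, with the default sandbox topology deployed, both of these
 # reach the same WebHDFS endpoint:
 #   https://localhost:8443/webhdfs/v1/?op=LISTSTATUS
 #   https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS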
-</code></pre><p>These default topology URLs exist for all of the services in the topology.</p><h4><a id="Fully+Qualified+URLs"></a>Fully Qualified URLs</h4><p>Examples of mappings for WebHDFS, WebHCat, Oozie and Stargate/HBase are shown below. These mappings are generated from the combination of the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology descriptors (e.g. <code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port numbers shown for the Cluster URLs represent the default ports for these services. The actual port number may be different for a given cluster.</p>
+</code></pre><p>These default topology URLs exist for all of the services in the topology.</p><h4><a id="Fully+Qualified+URLs">Fully Qualified URLs</a> <a href="#Fully+Qualified+URLs"><img src="markbook-section-link.png"/></a></h4><p>Examples of mappings for WebHDFS, WebHCat, Oozie and Stargate/HBase are shown below. These mappings are generated from the combination of the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology descriptors (e.g. <code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port numbers shown for the Cluster URLs represent the default ports for these services. The actual port number may be different for a given cluster.</p>
 <ul>
   <li>WebHDFS
   <ul>
@@ -335,7 +335,7 @@ https://{gateway-host}:{gateway-port}/{g
     <li>Gateway: <code>jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password}?hive.server2.transport.mode=http;hive.server2.thrift.http.path={gateway-path}/{cluster-name}/hive</code></li>
     <li>Cluster: <code>http://{hive-host}:10001/cliservice</code></li>
   </ul></li>
-</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the file name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The values for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, <code>{oozie-host}</code>, <code>{hbase-host}</code> and <code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h3><a id="Configuration"></a>Configuration</h3><h4><a id="Topology+Descriptors"></a>Topology Descriptors</h4><p>The topology descriptor files provide the gateway with per-cluster configuration information. This includes configuration for both the providers within the gateway and the services within the Hadoop cluster. These files are located in <code>{GATEWAY_HOME}/conf/topologies</code>. The general outline of this document looks like this.</p>
+</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the file name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The values for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, <code>{oozie-host}</code>, <code>{hbase-host}</code> and <code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h3><a id="Configuration">Configuration</a> <a href="#Configuration"><img src="markbook-section-link.png"/></a></h3><h4><a id="Topology+Descriptors">Topology Descriptors</a> <a href="#Topology+Descriptors"><img src="markbook-section-link.png"/></a></h4><p>The topology descriptor files provide the gateway with per-cluster configuration information. This includes configuration for both the providers within the gateway and the services within the Hadoop cluster. These files are located in <code>{GATEWAY_HOME}/conf/topologies</code>. The general outline of this document looks like this.</p>
 <pre><code>&lt;topology&gt;
     &lt;gateway&gt;
         &lt;provider&gt;
@@ -346,7 +346,7 @@ https://{gateway-host}:{gateway-port}/{g
 &lt;/topology&gt;
 </code></pre><p>There are typically multiple <code>&lt;provider&gt;</code> and <code>&lt;service&gt;</code> elements.</p>
 <dl><dt>/topology</dt><dd>Defines the provider configuration and service topology for a single Hadoop cluster.</dd><dt>/topology/gateway</dt><dd>Groups all of the provider elements.</dd><dt>/topology/gateway/provider</dt><dd>Defines the configuration of a specific provider for the cluster.</dd><dt>/topology/service</dt><dd>Defines the location of a specific Hadoop service within the Hadoop cluster.</dd>
-</dl><h5><a id="Provider+Configuration"></a>Provider Configuration</h5><p>Provider configuration is used to customize the behavior of a particular gateway feature. The general outline of a provider element looks like this.</p>
+</dl><h5><a id="Provider+Configuration">Provider Configuration</a> <a href="#Provider+Configuration"><img src="markbook-section-link.png"/></a></h5><p>Provider configuration is used to customize the behavior of a particular gateway feature. The general outline of a provider element looks like this.</p>
 <pre><code>&lt;provider&gt;
     &lt;role&gt;authentication&lt;/role&gt;
     &lt;name&gt;ShiroProvider&lt;/name&gt;
@@ -358,14 +358,14 @@ https://{gateway-host}:{gateway-port}/{g
 &lt;/provider&gt;
 </code></pre>
 <dl><dt>/topology/gateway/provider</dt><dd>Groups information for a specific provider.</dd><dt>/topology/gateway/provider/role</dt><dd>Defines the role of a particular provider. There are a number of pre-defined roles used by out-of-the-box provider plugins for the gateway. These roles are: authentication, identity-assertion, authorization, rewrite and hostmap.</dd><dt>/topology/gateway/provider/name</dt><dd>Defines the name of the provider for which this configuration applies. There can be multiple provider implementations for a given role. Specifying the name is used to identify which particular provider is being configured. Typically each topology descriptor should contain only one provider for each role but there are exceptions.</dd><dt>/topology/gateway/provider/enabled</dt><dd>Allows a particular provider to be enabled or disabled via <code>true</code> or <code>false</code> respectively. When a provider is disabled any filters associated with that provider are excluded from the processing chain.</dd><dt>/topology/gateway/provider/param</dt><dd>These elements are used to supply provider configuration. There can be zero or more of these per provider.</dd><dt>/topology/gateway/provider/param/name</dt><dd>The name of a parameter to pass to the provider.</dd><dt>/topology/gateway/provider/param/value</dt><dd>The value of a parameter to pass to the provider.</dd>
-</dl><h5><a id="Service+Configuration"></a>Service Configuration</h5><p>Service configuration is used to specify the location of services within the Hadoop cluster. The general outline of a service element looks like this.</p>
+</dl><h5><a id="Service+Configuration">Service Configuration</a> <a href="#Service+Configuration"><img src="markbook-section-link.png"/></a></h5><p>Service configuration is used to specify the location of services within the Hadoop cluster. The general outline of a service element looks like this.</p>
 <pre><code>&lt;service&gt;
     &lt;role&gt;WEBHDFS&lt;/role&gt;
     &lt;url&gt;http://localhost:50070/webhdfs&lt;/url&gt;
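    &lt;!-- the url value locates the actual service endpoint inside the cluster --&gt;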
 &lt;/service&gt;
 </code></pre>
 <dl><dt>/topology/service</dt><dd>Provides information about a particular service within the Hadoop cluster. Not all services are necessarily exposed as gateway endpoints.</dd><dt>/topology/service/role</dt><dd>Identifies the role of this service. Currently supported roles are: WEBHDFS, WEBHCAT, WEBHBASE, OOZIE, HIVE, NAMENODE, JOBTRACKER, RESOURCEMANAGER. Additional service roles can be supported via plugins.</dd><dt>/topology/service/url</dt><dd>The URL identifying the location of a particular service within the Hadoop cluster.</dd>
-</dl><h4><a id="Hostmap+Provider"></a>Hostmap Provider</h4><p>The purpose of the Hostmap provider is to handle situations where host are known by one name within the cluster and another name externally. This frequently occurs when virtual machines are used and in particular when using cloud hosting services. Currently, the Hostmap provider is configured as part of the topology file. The basic structure is shown below.</p>
+</dl><h4><a id="Hostmap+Provider">Hostmap Provider</a> <a href="#Hostmap+Provider"><img src="markbook-section-link.png"/></a></h4><p>The purpose of the Hostmap provider is to handle situations where host are known by one name within the cluster and another name externally. This frequently occurs when virtual machines are used and in particular when using cloud hosting services. Currently, the Hostmap provider is configured as part of the topology file. The basic structure is shown below.</p>
 <pre><code>&lt;topology&gt;
     &lt;gateway&gt;
         ...
@@ -379,7 +379,7 @@ https://{gateway-host}:{gateway-port}/{g
     &lt;/gateway&gt;
     ...
 &lt;/topology&gt;
-</code></pre><p>This mapping is required because the Hadoop services running within the cluster are unaware that they are being accessed from outside the cluster. Therefore URLs returned as part of REST API responses will typically contain internal host names. Since clients outside the cluster will be unable to resolve those host names, they must be mapped to external host names.</p><h5><a id="Hostmap+Provider+Example+-+EC2"></a>Hostmap Provider Example - EC2</h5><p>Consider an EC2 example where two VMs have been allocated. Each VM has an external host name by which it can be accessed via the internet. However, the EC2 VM is unaware of this external host name and instead is configured with the internal host name.</p>
+</code></pre><p>This mapping is required because the Hadoop services running within the cluster are unaware that they are being accessed from outside the cluster. Therefore URLs returned as part of REST API responses will typically contain internal host names. Since clients outside the cluster will be unable to resolve those host names, they must be mapped to external host names.</p><h5><a id="Hostmap+Provider+Example+-+EC2">Hostmap Provider Example - EC2</a> <a href="#Hostmap+Provider+Example+-+EC2"><img src="markbook-section-link.png"/></a></h5><p>Consider an EC2 example where two VMs have been allocated. Each VM has an external host name by which it can be accessed via the internet. However, the EC2 VM is unaware of this external host name and instead is configured with the internal host name.</p>
 <pre><code>External HOSTNAMES:
 ec2-23-22-31-165.compute-1.amazonaws.com
 ec2-23-23-25-10.compute-1.amazonaws.com
@@ -408,7 +408,7 @@ ip-10-39-107-209.ec2.internal
     &lt;/gateway&gt;
     ...
 &lt;/topology&gt;
-</code></pre><h5><a id="Hostmap+Provider+Example+-+Sandbox"></a>Hostmap Provider Example - Sandbox</h5><p>The Hortonworks Sandbox 2.x poses a different challenge for host name mapping. This version of the Sandbox uses port mapping to make the Sandbox VM appear as though it is accessible via localhost. However the Sandbox VM is internally configured to consider sandbox.hortonworks.com as the host name. So from the perspective of a client accessing Sandbox the external host name is localhost. The Hostmap configuration required to allow access to Sandbox from the host operating system is this.</p>
+</code></pre><h5><a id="Hostmap+Provider+Example+-+Sandbox">Hostmap Provider Example - Sandbox</a> <a href="#Hostmap+Provider+Example+-+Sandbox"><img src="markbook-section-link.png"/></a></h5><p>The Hortonworks Sandbox 2.x poses a different challenge for host name mapping. This version of the Sandbox uses port mapping to make the Sandbox VM appear as though it is accessible via localhost. However the Sandbox VM is internally configured to consider sandbox.hortonworks.com as the host name. So from the perspective of a client accessing Sandbox the external host name is localhost. The Hostmap configuration required to allow access to Sandbox from the host operating system is this.</p>
 <pre><code>&lt;topology&gt;
     &lt;gateway&gt;
         ...
@@ -422,9 +422,9 @@ ip-10-39-107-209.ec2.internal
     &lt;/gateway&gt;
     ...
 &lt;/topology&gt;
-</code></pre><h5><a id="Hostmap+Provider+Configuration"></a>Hostmap Provider Configuration</h5><p>Details about each provider configuration element is enumerated below.</p>
+</code></pre><h5><a id="Hostmap+Provider+Configuration">Hostmap Provider Configuration</a> <a href="#Hostmap+Provider+Configuration"><img src="markbook-section-link.png"/></a></h5><p>Details about each provider configuration element is enumerated below.</p>
 <dl><dt>topology/gateway/provider/role</dt><dd>The role for a Hostmap provider must always be <code>hostmap</code>.</dd><dt>topology/gateway/provider/name</dt><dd>The Hostmap provider supplied out-of-the-box is selected via the name <code>static</code>.</dd><dt>topology/gateway/provider/enabled</dt><dd>Host mapping can be enabled or disabled by providing <code>true</code> or <code>false</code>.</dd><dt>topology/gateway/provider/param</dt><dd>Host mapping is configured by providing parameters for each external to internal mapping.</dd><dt>topology/gateway/provider/param/name</dt><dd>The parameter names represent the external host names associated with the internal host names provided by the value element. This can be a comma separated list of host names that all represent the same physical host. When mapping from internal to external host name the first external host name in the list is used.</dd><dt>topology/gateway/provider/param/value</dt><dd>The parameter values represent the internal host names associated with the external host names provided by the name element. This can be a comma separated list of host names that all represent the same physical host. When mapping from external to internal host names the first internal host name in the list is used.</dd>
-</dl><h4><a id="Logging"></a>Logging</h4><p>If necessary you can enable additional logging by editing the <code>log4j.properties</code> file in the <code>conf</code> directory. Changing the rootLogger value from <code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug logging. A number of useful, more fine loggers are also provided in the file.</p><h4><a id="Java+VM+Options"></a>Java VM Options</h4><p>TODO - Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret"></a>Persisting the Master Secret</h4><p>The master secret is required to start the server. This secret is used to access secured artifacts by the gateway instance. Keystore, trust stores and credential stores are all protected with the master secret.</p><p>You may persist the master secret by supplying the <em>-persist-master</em> switch at startup. This will result in a warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in ord
 er to protect the persisted password.</p><p>It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by the user that the gateway is running as.</p><p>After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your environment. This is probably the most important layer of defense for master secret. Do not assume that the encryption if sufficient protection.</p><p>A specific user should be created to run the gateway this user will be the only user with permissions for the persisted master file.</p><p>See the Knox CLI section for descriptions of the command line utilties related to the master secret.</p><h4><a id="Management+of+Security+Artifacts"></a>Management of Security Artifacts</h4><p>There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data. T
 hese artifacts can be managed from outside of the gateway instances or generated and populated by the gateway instance itself.</p><p>The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway instances and instances as part of a cluster of gateways in mind.</p><p>Upon start of the gateway server we:</p>
+</dl><h4><a id="Logging">Logging</a> <a href="#Logging"><img src="markbook-section-link.png"/></a></h4><p>If necessary you can enable additional logging by editing the <code>log4j.properties</code> file in the <code>conf</code> directory. Changing the rootLogger value from <code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug logging. A number of useful, more fine loggers are also provided in the file.</p><h4><a id="Java+VM+Options">Java VM Options</a> <a href="#Java+VM+Options"><img src="markbook-section-link.png"/></a></h4><p>TODO - Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret">Persisting the Master Secret</a> <a href="#Persisting+the+Master+Secret"><img src="markbook-section-link.png"/></a></h4><p>The master secret is required to start the server. This secret is used to access secured artifacts by the gateway instance. Keystore, trust stores and credential stores are all protected with the master secret.</p><p>You may persist the master
  secret by supplying the <em>-persist-master</em> switch at startup. This will result in a warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in order to protect the persisted password.</p><p>It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by the user that the gateway is running as.</p><p>After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your environment. This is probably the most important layer of defense for master secret. Do not assume that the encryption if sufficient protection.</p><p>A specific user should be created to run the gateway this user will be the only user with permissions for the persisted master file.</p><p>See the Knox CLI section for descriptions of the command line utilties related to the master secret.</p><h4><a id="Management+of+Security+Artifacts">Management of 
 Security Artifacts</a> <a href="#Management+of+Security+Artifacts"><img src="markbook-section-link.png"/></a></h4><p>There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of the gateway instances or generated and populated by the gateway instance itself.</p><p>The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway instances and instances as part of a cluster of gateways in mind.</p><p>Upon start of the gateway server we:</p>
 <ol>
   <li>Look for an identity store at <code>data/security/keystores/gateway.jks</code>.  The identity store contains the certificate and private key used to represent the identity of the server for SSL connections and signature creation.
   <ul>
@@ -448,7 +448,7 @@ ip-10-39-107-209.ec2.internal
   <li>Using a single gateway instance as a master instance the artifacts can be generated or placed into the expected location and then replicated across all of the slave instances before startup.</li>
   <li>Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to replicate them over the network. Of course, NFS mounts have their own challenges.</li>
   <li>Using the KnoxCLI to create and manage the security artifacts.</li>
-</ol><p>See the Knox CLI section for descriptions of the command line utilities related to the security artifact management.</p><h4><a id="Keystores"></a>Keystores</h4><p>In order to provide your own certificate for use by the gateway, you will need to either import an existing key pair into a Java keystore or generate a self-signed cert using the Java keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore"></a>Importing a key pair into a Java keystore</h5><p>One way to accomplish this is to start with a PKCS12 store for your key pair and then convert it to a Java keystore or JKS.</p><p>The following example uses openssl to create a PKCS12 encoded store from your provided certificate and private key that are in PEM format.</p>
+</ol><p>See the Knox CLI section for descriptions of the command line utilities related to the security artifact management.</p><h4><a id="Keystores">Keystores</a> <a href="#Keystores"><img src="markbook-section-link.png"/></a></h4><p>In order to provide your own certificate for use by the gateway, you will need to either import an existing key pair into a Java keystore or generate a self-signed cert using the Java keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore">Importing a key pair into a Java keystore</a> <a href="#Importing+a+key+pair+into+a+Java+keystore"><img src="markbook-section-link.png"/></a></h5><p>One way to accomplish this is to start with a PKCS12 store for your key pair and then convert it to a Java keystore or JKS.</p><p>The following example uses openssl to create a PKCS12 encoded store from your provided certificate and private key that are in PEM format.</p>
 <pre><code>openssl pkcs12 -export -in cert.pem -inkey key.pem &gt; server.p12
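 # note: openssl will prompt for an export password here; keep it at hand, as the
 # import step below will ask for it as the source keystore password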
 </code></pre><p>The next example converts the PKCS12 store into a Java keystore (JKS). It should prompt you for the keystore and key passwords for the destination keystore. You must use the master-secret for the keystore password and keep track of the password that you use for the key passphrase.</p>
 <pre><code>keytool -importkeystore -srckeystore {server.p12} -destkeystore gateway.jks -srcstoretype pkcs12
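 # optional sanity check (a sketch, not required): list the destination keystore and,
 # if the key landed under a default alias (often 1 for an openssl-built PKCS12 store
 # created without -name), rename it to the gateway-identity alias that Knox expects
 keytool -list -keystore gateway.jks
 keytool -changealias -alias 1 -destalias gateway-identity -keystore gateway.jks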
@@ -459,10 +459,10 @@ ip-10-39-107-209.ec2.internal
   <li><p>the passwords for the keystore and the imported key may both be set to the master secret for the gateway install. You can change the key passphrase after import using keytool as well. You may need to do this in order to provision the password in the credential store as described later in this section. For example:</p><p>keytool -keypasswd -alias gateway-identity -keystore gateway.jks</p></li>
 </ol><p>NOTE: The password for the keystore as well as that of the imported key may be the master secret for the gateway instance or you may set the gateway-identity-passphrase alias using the Knox CLI to the actual key passphrase. See the Knox CLI section for details.</p><p>The following will allow you to provision the passphrase for the private key that you set during keystore creation above - it will prompt you for the actual passphrase.</p>
 <pre><code>bin/knoxcli.sh create-alias gateway-identity-passphrase
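 # a non-interactive variant (sketch) using the documented --value option, where
 # {key-passphrase} is a placeholder for the key passphrase chosen above
 bin/knoxcli.sh create-alias gateway-identity-passphrase --value {key-passphrase}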
-</code></pre><h5><a id="Generating+a+self-signed+cert+for+use+in+testing+or+development+environments"></a>Generating a self-signed cert for use in testing or development environments</h5>
+</code></pre><h5><a id="Generating+a+self-signed+cert+for+use+in+testing+or+development+environments">Generating a self-signed cert for use in testing or development environments</a> <a href="#Generating+a+self-signed+cert+for+use+in+testing+or+development+environments"><img src="markbook-section-link.png"/></a></h5>
 <pre><code>keytool -genkey -keyalg RSA -alias gateway-identity -keystore gateway.jks \
     -storepass {master-secret} -validity 360 -keysize 2048
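 # optional follow-up (a sketch): export the self-signed cert in PEM form so that
 # clients can add it to their truststore for hostname-verified SSL connections
 keytool -export -rfc -alias gateway-identity -keystore gateway.jks -file gateway-identity.pem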
-</code></pre><p>Keytool will prompt you for a number of elements used will comprise the distiniguished name (DN) within your certificate. </p><p><em>NOTE:</em> When it prompts you for your First and Last name be sure to type in the hostname of the machine that your gateway instance will be running on. This is used by clients during hostname verification to ensure that the presented certificate matches the hostname that was used in the URL for the connection - so they need to match.</p><p><em>NOTE:</em> When it prompts for the key password just press enter to ensure that it is the same as the keystore password. Which as was described earlier must match the master secret for the gateway instance. Alternatively, you can set it to another passphrase - take note of it and set the gateway-identity-passphrase alias to that passphrase using the Knox CLI.</p><p>See the Knox CLI section for descriptions of the command line utilties related to the management of the keystores.</p><h5><a id="Usi
 ng+a+CA+Signed+Key+Pair"></a>Using a CA Signed Key Pair</h5><p>For certain deployments a certificate key pair that is signed by a trusted certificate authority is required. There are a number of different ways in which these certificates are acquired and can be converted and imported into the Apache Knox keystore.</p><p>The following steps have been used to do this and are provided here for guidance in your installation. You may have to adjust according to your environment.</p><p>General steps:</p>
+</code></pre><p>Keytool will prompt you for a number of elements that will comprise the distinguished name (DN) within your certificate. </p><p><em>NOTE:</em> When it prompts you for your First and Last name, be sure to type in the hostname of the machine that your gateway instance will be running on. This is used by clients during hostname verification to ensure that the presented certificate matches the hostname that was used in the URL for the connection - so they need to match.</p><p><em>NOTE:</em> When it prompts for the key password just press enter to ensure that it is the same as the keystore password, which, as was described earlier, must match the master secret for the gateway instance. Alternatively, you can set it to another passphrase - take note of it and set the gateway-identity-passphrase alias to that passphrase using the Knox CLI.</p><p>See the Knox CLI section for descriptions of the command line utilities related to the management of the keystores.</p><h5><a id="Using+a+CA+Signed+Key+Pair">Using a CA Signed Key Pair</a> <a href="#Using+a+CA+Signed+Key+Pair"><img src="markbook-section-link.png"/></a></h5><p>For certain deployments a certificate key pair that is signed by a trusted certificate authority is required. There are a number of different ways in which these certificates are acquired and can be converted and imported into the Apache Knox keystore.</p><p>The following steps have been used to do this and are provided here for guidance in your installation. You may have to adjust according to your environment.</p><p>General steps:</p>
 <ol>
  <li>stop the gateway and back up all files in /var/lib/knox/data/security/keystores<br/>gateway.sh stop</li>
  <li>create a new master key for Knox and persist it; the master key will be referred to in the following steps as $master-key<br/>knoxcli.sh create-master -force</li>
@@ -493,13 +493,13 @@ ip-10-39-107-209.ec2.internal
   <ul>
    <li>curl &ndash;cacert supwin12ad.cer -u hdptester:hadoop -X GET &lsquo;<a href="https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS">https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS</a>&rsquo;, or verify through a client browser that already has the corporate CA cert installed.</li>
   </ul></li>
-</ol><h5><a id="Credential+Store"></a>Credential Store</h5><p>Whenever you provide your own keystore with either a self-signed cert or an issued certificate signed by a trusted authority, you will need to set an alias for the gateway-identity-passphrase or create an empty credential store. This is necessary for the current release in order for the system to determine the correct password for the keystore and the key.</p><p>The credential stores in Knox use the JCEKS keystore type as it allows for the storage of general secrets in addition to certificates.</p><p>Keytool may be used to create credential stores but the Knox CLI section details how to create aliases. These aliases are managed within credential stores which are created by the CLI as needed. The simplest approach is to create the gateway-identity-passpharse alias with the Knox CLI. This will create the credential store if it doesn&rsquo;t already exist and add the key passphrase.</p><p>See the Knox CLI section for descrip
 tions of the command line utilties related to the management of the credential stores.</p><h5><a id="Provisioning+of+Keystores"></a>Provisioning of Keystores</h5><p>Once you have created these keystores you must move them into place for the gateway to discover them and use them to represent its identity for SSL connections. This is done by copying the keystores to the <code>{GATEWAY_HOME}/data/security/keystores</code> directory for your gateway install.</p><h4><a id="Summary+of+Secrets+to+be+Managed"></a>Summary of Secrets to be Managed</h4>
+</ol><h5><a id="Credential+Store">Credential Store</a> <a href="#Credential+Store"><img src="markbook-section-link.png"/></a></h5><p>Whenever you provide your own keystore with either a self-signed cert or an issued certificate signed by a trusted authority, you will need to set an alias for the gateway-identity-passphrase or create an empty credential store. This is necessary for the current release in order for the system to determine the correct password for the keystore and the key.</p><p>The credential stores in Knox use the JCEKS keystore type as it allows for the storage of general secrets in addition to certificates.</p><p>Keytool may be used to create credential stores but the Knox CLI section details how to create aliases. These aliases are managed within credential stores which are created by the CLI as needed. The simplest approach is to create the gateway-identity-passpharse alias with the Knox CLI. This will create the credential store if it doesn&rsquo;t already exist
  and add the key passphrase.</p><p>See the Knox CLI section for descriptions of the command line utilties related to the management of the credential stores.</p><h5><a id="Provisioning+of+Keystores">Provisioning of Keystores</a> <a href="#Provisioning+of+Keystores"><img src="markbook-section-link.png"/></a></h5><p>Once you have created these keystores you must move them into place for the gateway to discover them and use them to represent its identity for SSL connections. This is done by copying the keystores to the <code>{GATEWAY_HOME}/data/security/keystores</code> directory for your gateway install.</p><h4><a id="Summary+of+Secrets+to+be+Managed">Summary of Secrets to be Managed</a> <a href="#Summary+of+Secrets+to+be+Managed"><img src="markbook-section-link.png"/></a></h4>
 <ol>
   <li>Master secret - the same for all gateway instances in a cluster of gateways</li>
   <li>All security related artifacts are protected with the master secret</li>
   <li>Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway instances in the cluster of gateways</li>
   <li>Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same for the same topology across the cluster of gateway instances.  However, they are specific to the topology - so secrets for one hadoop cluster are different from those of another.  This allows for fail-over from one gateway instance to another even when encryption is being used while not allowing the compromise of one encryption key to expose the data for all clusters.</li>
-</ol><p>NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity store will not be able to be blindly replicated as host name verification problems will ensue. Obviously, trust-stores will need to be taken into account as well.</p><h3><a id="Knox+CLI"></a>Knox CLI</h3><p>The Knox CLI is a command line utility for management of various aspects of the Knox deployment. It is primarily concerned with the management of the security artifacts for the gateway instance and each of the deployed topologies or hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various security artifacts are also generated and populated automatically by the Knox Gateway runtime when they are not found at startup. The assumptions made in those cases are appropriate for a test or development gateway instance
  and assume &lsquo;localhost&rsquo; for hostname specific activities. For production deployments the use of the CLI may aid in managing some production deployments.</p><p>The knoxcli.sh script is located in the {GATEWAY_HOME}/bin directory.</p><h4><a id="Help"></a>Help</h4><h5><a id="knoxcli.sh+[--help]"></a>knoxcli.sh [&ndash;help]</h5><p>prints help for all commands</p><h4><a id="Knox+Verison+Info"></a>Knox Verison Info</h4><h5><a id="knoxcli.sh+version+[--help]"></a>knoxcli.sh version [&ndash;help]</h5><p>Displays Knox version information.</p><h4><a id="Master+secret+persistence"></a>Master secret persistence</h4><h5><a id="knoxcli.sh+create-master+[--force][--help]"></a>knoxcli.sh create-master [&ndash;force][&ndash;help]</h5><p>Creates and persists an encrypted master secret in a file within {GATEWAY_HOME}/data/security/master. </p><p>NOTE: This command fails when there is an existing master file in the expected location. You may force it to overwrite the master file with the &
 ndash;force switch. NOTE: this will require you to change passwords protecting the keystores for the gateway identity keystores and all credential stores.</p><h4><a id="Alias+creation"></a>Alias creation</h4><h5><a id="knoxcli.sh+create-alias+n+[--cluster+c]+[--value+v]+[--generate]+[--help]"></a>knoxcli.sh create-alias n [&ndash;cluster c] [&ndash;value v] [&ndash;generate] [&ndash;help]</h5><p>Creates a password alias and stores it in a credential store within the {GATEWAY_HOME}/data/security/keystores dir. </p>
+</ol><p>NOTE: The SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity store will not be able to be blindly replicated as hostname verification problems will ensue. Obviously, trust-stores will need to be taken into account as well.</p><h3><a id="Knox+CLI">Knox CLI</a> <a href="#Knox+CLI"><img src="markbook-section-link.png"/></a></h3><p>The Knox CLI is a command line utility for management of various aspects of the Knox deployment. It is primarily concerned with the management of the security artifacts for the gateway instance and each of the deployed topologies, or Hadoop clusters, that are gated by the Knox Gateway instance.</p><p>The various security artifacts are also generated and populated automatically by the Knox Gateway runtime when they are not found at startup. The assumptions made in those cases are appropriate for a test or development gateway instance and assume &lsquo;localhost&rsquo; for hostname specific activities. For production deployments, the CLI may aid in managing these artifacts.</p><p>The knoxcli.sh script is located in the {GATEWAY_HOME}/bin directory.</p><h4><a id="Help">Help</a> <a href="#Help"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+[--help]">knoxcli.sh [&ndash;help]</a> <a href="#knoxcli.sh+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Prints help for all commands.</p><h4><a id="Knox+Verison+Info">Knox Version Info</a> <a href="#Knox+Verison+Info"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+version+[--help]">knoxcli.sh version [&ndash;help]</a> <a href="#knoxcli.sh+version+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Displays Knox version information.</p><h4><a id="Master+secret+persistence">Master secret persistence</a> <a href="#Master+secret+persistence"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+create-master+[--force][--help]">knoxcli.sh create-master [&ndash;force][&ndash;help]</a> <a href="#knoxcli.sh+create-master+[--force][--help]"><img src="markbook-section-link.png"/></a></h5><p>Creates and persists an encrypted master secret in a file within {GATEWAY_HOME}/data/security/master. </p><p>NOTE: This command fails when there is an existing master file in the expected location. You may force it to overwrite the master file with the &ndash;force switch. NOTE: this will require you to change the passwords protecting the gateway identity keystore and all credential stores.</p><h4><a id="Alias+creation">Alias creation</a> <a href="#Alias+creation"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+create-alias+n+[--cluster+c]+[--value+v]+[--generate]+[--help]">knoxcli.sh create-alias n [&ndash;cluster c] [&ndash;value v] [&ndash;generate] [&ndash;help]</a> <a href="#knoxcli.sh+create-alias+n+[--cluster+c]+[--value+v]+[--generate]+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Creates a password alias and stores it in a credential store within the {GATEWAY_HOME}/data/security/keystores dir. </p>
 <table>
   <thead>
     <tr>
@@ -525,7 +525,7 @@ ip-10-39-107-209.ec2.internal
      <td>boolean flag to indicate whether the tool should just generate the value. This assumes that &ndash;value is not set and will result in an error otherwise. The user will not be prompted for the value when &ndash;generate is set.</td>
     </tr>
   </tbody>
-</table><h4><a id="Alias+deletion"></a>Alias deletion</h4><h5><a id="knoxcli.sh+delete-alias+n+[--cluster+c]+[--help]"></a>knoxcli.sh delete-alias n [&ndash;cluster c] [&ndash;help]</h5><p>Deletes a password and alias mapping from a credential store within {GATEWAY_HOME}/data/security/keystores. </p>
+</table><h4><a id="Alias+deletion">Alias deletion</a> <a href="#Alias+deletion"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+delete-alias+n+[--cluster+c]+[--help]">knoxcli.sh delete-alias n [&ndash;cluster c] [&ndash;help]</a> <a href="#knoxcli.sh+delete-alias+n+[--cluster+c]+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Deletes a password and alias mapping from a credential store within {GATEWAY_HOME}/data/security/keystores. </p>
 <table>
   <thead>
     <tr>
@@ -543,7 +543,7 @@ ip-10-39-107-209.ec2.internal
      <td>name of the Hadoop cluster for the cluster-specific credential store; otherwise assumes __gateway</td>
     </tr>
   </tbody>
-</table><h4><a id="Alias+listing"></a>Alias listing</h4><h5><a id="knoxcli.sh+list-alias+[--cluster+c]+[--help]"></a>knoxcli.sh list-alias [&ndash;cluster c] [&ndash;help]</h5><p>Lists the alias names for the credential store within {GATEWAY_HOME}/data/security/keystores. </p><p>NOTE: This command will list the aliases in lowercase which is a result of the underlying credential store implementation. Lookup of credentials is a case insensitive operation - so this is not an issue.</p>
+</table><h4><a id="Alias+listing">Alias listing</a> <a href="#Alias+listing"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+list-alias+[--cluster+c]+[--help]">knoxcli.sh list-alias [&ndash;cluster c] [&ndash;help]</a> <a href="#knoxcli.sh+list-alias+[--cluster+c]+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Lists the alias names for the credential store within {GATEWAY_HOME}/data/security/keystores. </p><p>NOTE: This command will list the aliases in lowercase which is a result of the underlying credential store implementation. Lookup of credentials is a case insensitive operation - so this is not an issue.</p>
 <table>
   <thead>
     <tr>
@@ -557,7 +557,7 @@ ip-10-39-107-209.ec2.internal
      <td>name of the Hadoop cluster for the cluster-specific credential store; otherwise assumes __gateway</td>
     </tr>
   </tbody>
-</table><h4><a id="Self-signed+cert+creation"></a>Self-signed cert creation</h4><h5><a id="knoxcli.sh+create-cert+[--hostname+n]+[--help]"></a>knoxcli.sh create-cert [&ndash;hostname n] [&ndash;help]</h5><p>Creates and stores a self-signed certificate to represent the identity of the gateway instance. This is stored within the {GATEWAY_HOME}/data/security/keystores/gateway.jks keystore. </p>
+</table><h4><a id="Self-signed+cert+creation">Self-signed cert creation</a> <a href="#Self-signed+cert+creation"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+create-cert+[--hostname+n]+[--help]">knoxcli.sh create-cert [&ndash;hostname n] [&ndash;help]</a> <a href="#knoxcli.sh+create-cert+[--hostname+n]+[--help]"><img src="markbook-section-link.png"/></a></h5><p>Creates and stores a self-signed certificate to represent the identity of the gateway instance. This is stored within the {GATEWAY_HOME}/data/security/keystores/gateway.jks keystore. </p>
 <table>
   <thead>
     <tr>
@@ -571,7 +571,7 @@ ip-10-39-107-209.ec2.internal
       <td>name of the host to be used in the self-signed certificate. This allows multi-host deployments to specify the proper hostnames for hostname verification to succeed on the client side of the SSL connection. The default is “localhost”.</td>
     </tr>
   </tbody>
-</table><h4><a id="Topology+Redeploy"></a>Topology Redeploy</h4><h5><a id="knoxcli.sh+redeploy+[--cluster+c]"></a>knoxcli.sh redeploy [&ndash;cluster c]</h5><p>Redeploys one or all of the gateway&rsquo;s clusters (a.k.a topologies).</p><h3><a id="Admin+API"></a>Admin API</h3><p>Access to the administrator functions of Knox are provided by the Admin REST API.</p><h4><a id="Admin+API+URL"></a>Admin API URL</h4><p>The URL mapping for the Knox Admin API is simple:</p>
+</table><h4><a id="Topology+Redeploy">Topology Redeploy</a> <a href="#Topology+Redeploy"><img src="markbook-section-link.png"/></a></h4><h5><a id="knoxcli.sh+redeploy+[--cluster+c]">knoxcli.sh redeploy [&ndash;cluster c]</a> <a href="#knoxcli.sh+redeploy+[--cluster+c]"><img src="markbook-section-link.png"/></a></h5><p>Redeploys one or all of the gateway&rsquo;s clusters (a.k.a topologies).</p><h3><a id="Admin+API">Admin API</a> <a href="#Admin+API"><img src="markbook-section-link.png"/></a></h3><p>Access to the administrator functions of Knox are provided by the Admin REST API.</p><h4><a id="Admin+API+URL">Admin API URL</a> <a href="#Admin+API+URL"><img src="markbook-section-link.png"/></a></h4><p>The URL mapping for the Knox Admin API is simple:</p>
 <table>
   <tbody>
     <tr>
@@ -579,17 +579,17 @@ ip-10-39-107-209.ec2.internal
       <td><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1</code> </td>
     </tr>
   </tbody>
-</table><p>Please note that to access that admin API, the user attempting to connect must have admin credentials inside of the LDAP Server</p><h5><a id="API+Documentation"></a>API Documentation</h5><h6><a id="Operations"></a>Operations</h6>
+</table><p>Please note that to access the Admin API, the connecting user must have admin credentials in the LDAP server.</p><h5><a id="API+Documentation">API Documentation</a> <a href="#API+Documentation"><img src="markbook-section-link.png"/></a></h5><h6><a id="Operations">Operations</a> <a href="#Operations"><img src="markbook-section-link.png"/></a></h6>
 <ul>
   <li><h6>HTTP GET</h6> 1. <a href="#Server+Version">Server Version</a><br/> 2. <a href="#Topology+Collection">Topology Collection</a><br/> 3. <a href="#Topology">Topology</a></li>
   <li><h6>HTTP PUT</h6></li>
   <li><h6>HTTP DELETE</h6></li>
-</ul><h5><a id="Server+Version"></a>Server Version</h5><h6><a id="Description"></a>Description</h6><p>Calls to Knox and returns the gateway&rsquo;s current version and the version hash inside of a JSON object. </p><h6><a id="Example+Request+URL"></a>Example Request URL</h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code> </p><h6><a id="Example+cURL+Request"></a>Example cURL Request</h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code></p><h6><a id="Response"></a>Response</h6>
+</ul><h5><a id="Server+Version">Server Version</a> <a href="#Server+Version"><img src="markbook-section-link.png"/></a></h5><h6><a id="Description">Description</a> <a href="#Description"><img src="markbook-section-link.png"/></a></h6><p>Calls to Knox and returns the gateway&rsquo;s current version and the version hash inside of a JSON object. </p><h6><a id="Example+Request+URL">Example Request URL</a> <a href="#Example+Request+URL"><img src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code> </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a href="#Example+cURL+Request"><img src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code></p><h6><a id="Response">Response</a> <a href="#Response"><img src="markbook-section-link.png"/></a></h6>
 <pre><code>    {
        &quot;hash&quot;:&quot;{version-hash}&quot;,
        &quot;version&quot;:&quot;0.5.0&quot;
     }
-</code></pre><h5><a id="Topology+Collection"></a>Topology Collection</h5><h6><a id="Description"></a>Description</h6><p>Calls to Knox and return an array of JSON objects that represent the list of deployed topologies currently inside of the gateway. </p><h6><a id="Example+Request+URL"></a>Example Request URL</h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/{api-version}/topologies</code> </p><h6><a id="Example+cURL+Request"></a>Example cURL Request</h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies</code></p><h6><a id="Response"></a>Response</h6>
+</code></pre><h5><a id="Topology+Collection">Topology Collection</a> <a href="#Topology+Collection"><img src="markbook-section-link.png"/></a></h5><h6><a id="Description">Description</a> <a href="#Description"><img src="markbook-section-link.png"/></a></h6><p>Calls to Knox and return an array of JSON objects that represent the list of deployed topologies currently inside of the gateway. </p><h6><a id="Example+Request+URL">Example Request URL</a> <a href="#Example+Request+URL"><img src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/{api-version}/topologies</code> </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a href="#Example+cURL+Request"><img src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies</code></p><h6><a id="Response">Response</a> <a href="#Response"><img src="markbook-section-link.png"/><
 /a></h6>
 <pre><code>[  
     {  
        &quot;href&quot;:&quot;https://localhost:8443/gateway/admin/api/v1/topologies/_default&quot;,
@@ -604,7 +604,7 @@ ip-10-39-107-209.ec2.internal
        &quot;uri&quot;:&quot;https://localhost:8443/gateway/admin&quot;
     }
 ]  
-</code></pre><h5><a id="Topology"></a>Topology</h5><h6><a id="Description"></a>Description</h6><p>Calls to Knox and return a JSON object that represents the requested topology </p><h6><a id="Example+Request+URL"></a>Example Request URL</h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code> </p><h6><a id="Example+cURL+Request"></a>Example cURL Request</h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code></p><h6><a id="Response"></a>Response</h6>
+</code></pre><h5><a id="Topology">Topology</a> <a href="#Topology"><img src="markbook-section-link.png"/></a></h5><h6><a id="Description">Description</a> <a href="#Description"><img src="markbook-section-link.png"/></a></h6><p>Calls to Knox and return a JSON object that represents the requested topology </p><h6><a id="Example+Request+URL">Example Request URL</a> <a href="#Example+Request+URL"><img src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code> </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a href="#Example+cURL+Request"><img src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password -i -k https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code></p><h6><a id="Response">Response</a> <a href="#Response"><img src="markbook-section-link.png"/></a></h6>
 <pre><code>{
     &quot;name&quot;: &quot;admin&quot;,
     &quot;providers&quot;: [{
@@ -648,7 +648,7 @@ ip-10-39-107-209.ec2.internal
     &quot;timestamp&quot;: 1406672646000,
     &quot;uri&quot;: &quot;https://localhost:8443/gateway/admin&quot;
 }
-</code></pre><h3><a id="Authentication"></a>Authentication</h3><p>There are two types of providers supported in Knox for establishing a user&rsquo;s identity:</p>
+</code></pre><h3><a id="Authentication">Authentication</a> <a href="#Authentication"><img src="markbook-section-link.png"/></a></h3><p>There are two types of providers supported in Knox for establishing a user&rsquo;s identity:</p>
 <ol>
   <li>Authentication Providers</li>
   <li>Federation Providers</li>
@@ -658,7 +658,7 @@ ip-10-39-107-209.ec2.internal
   <li>Specific configuration for the bundled BASIC/LDAP configuration</li>
  <li>Some tips on what may need to be customized for your environment</li>
  <li>How to set up the use of LDAP over SSL (LDAPS)</li>
-</ol><h4><a id="General+Configuration+for+Shiro+Provider"></a>General Configuration for Shiro Provider</h4><p>As is described in the configuration section of this document, providers have a name-value based configuration - as is the common pattern in the rest of Hadoop.</p><p>The following example shows the format of the configuration for a given provider:</p>
+</ol><h4><a id="General+Configuration+for+Shiro+Provider">General Configuration for Shiro Provider</a> <a href="#General+Configuration+for+Shiro+Provider"><img src="markbook-section-link.png"/></a></h4><p>As is described in the configuration section of this document, providers have a name-value based configuration - as is the common pattern in the rest of Hadoop.</p><p>The following example shows the format of the configuration for a given provider:</p>
 <pre><code>&lt;provider&gt;
     &lt;role&gt;authentication&lt;/role&gt;
     &lt;name&gt;ShiroProvider&lt;/name&gt;
@@ -703,12 +703,12 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
             &lt;value&gt;authcBasic&lt;/value&gt;
         &lt;/param&gt;
     &lt;/provider&gt;

[... 1103 lines stripped ...]
Added: knox/site/books/knox-0-5-0/markbook-section-link.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/markbook-section-link.png?rev=1710635&view=auto
==============================================================================
Binary file - no diff available.

Propchange: knox/site/books/knox-0-5-0/markbook-section-link.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: knox/site/books/knox-0-5-0/runtime-overview.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/runtime-overview.png?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-5-0/runtime-request-processing.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/runtime-request-processing.png?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-6-0/deployment-overview.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-6-0/deployment-overview.png?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-6-0/deployment-provider.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-6-0/deployment-provider.png?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-6-0/deployment-service.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-6-0/deployment-service.png?rev=1710635&r1=1710634&r2=1710635&view=diff
==============================================================================
Binary files - no diff available.