Posted to commits@knox.apache.org by km...@apache.org on 2013/09/27 00:01:14 UTC

svn commit: r1526721 - in /incubator/knox: site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html trunk/books/0.3.0/book.md trunk/books/0.3.0/book_trouble-shooting.md trunk/books/0.3.0/book_troubleshooting.md

Author: kminder
Date: Thu Sep 26 22:01:13 2013
New Revision: 1526721

URL: http://svn.apache.org/r1526721
Log:
Change Trouble Shooting to Troubleshooting.

Added:
    incubator/knox/trunk/books/0.3.0/book_troubleshooting.md
      - copied, changed from r1526719, incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md
Removed:
    incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md
Modified:
    incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
    incubator/knox/trunk/books/0.3.0/book.md

Modified: incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html?rev=1526721&r1=1526720&r2=1526721&view=diff
==============================================================================
--- incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html (original)
+++ incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html Thu Sep 26 22:01:13 2013
@@ -45,25 +45,33 @@
     <li><a href="#HBase">HBase/Starbase</a></li>
     <li><a href="#Hive">Hive</a></li>
   </ul></li>
-  <li><a href="#Trouble+Shooting">Trouble Shooting</a></li>
+  <li><a href="#Troubleshooting">Troubleshooting</a></li>
   <li><a href="#Export+Controls">Export Controls</a></li>
 </ul><h2><a id="Introduction"></a>Introduction</h2><p>The Apache Knox Gateway is a system that provides a single point of authentication and access for Apache Hadoop services in a cluster. The goal is to simplify Hadoop security for both users (i.e. who access the cluster data and execute jobs) and operators (i.e. who control access and manage the cluster). The gateway runs as a server (or cluster of servers) that provide centralized access to one or more Hadoop clusters. In general the goals of the gateway are as follows:</p>
 <ul>
-  <li>Provide perimeter security for Hadoop REST APIs to make Hadoop security setup easier</li>
-  <li>Support authentication and token verification security scenarios</li>
-  <li>Deliver users a single URL end-point that aggregates capabilities for data and jobs</li>
-  <li>Enable integration with enterprise and cloud identity management environments</li>
+  <li>Provide perimeter security for Hadoop REST APIs to make Hadoop security easier to set up and use
+  <ul>
+    <li>Provide authentication and token verification at the perimeter</li>
+    <li>Enable authentication integration with enterprise and cloud identity management systems</li>
+    <li>Provide service level authorization at the perimeter</li>
+  </ul></li>
+  <li>Expose a single URL hierarchy that aggregates REST APIs of a Hadoop cluster
+  <ul>
+    <li>Limit the network endpoints (and therefore firewall holes) required to access a Hadoop cluster</li>
+    <li>Hide the internal Hadoop cluster topology from potential attackers</li>
+  </ul></li>
 </ul><h2><a id="Getting+Started"></a>Getting Started</h2><p>This section provides everything you need to know to get the gateway up and running against a Sandbox VM Hadoop cluster.</p><h3><a id="Requirements"></a>Requirements</h3><h4><a id="Java"></a>Java</h4><p>Java 1.6 or later is required for the Knox Gateway runtime. Use the command below to check the version of Java installed on the system where Knox will be running.</p>
 <pre><code>java -version
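 # any Java reporting version 1.6 or later satisfies the requirement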
-</code></pre><h4><a id="Hadoop"></a>Hadoop</h4><p>An an existing Hadoop 1.x or 2.x cluster is required for Knox to protect. One of the easiest ways to ensure this it to utilize a Hortonworks Sandbox VM. It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here. It is also possible to use a limited set of services in Hadoop cluster secured with Kerberos. This too required additional configuration that is not described here.</p><p>The Hadoop cluster should be ensured to have at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running. HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.</p><p>The instructions that follow assume a few things:</p>
+</code></pre><h4><a id="Hadoop"></a>Hadoop</h4><p>An an existing Hadoop 1.x or 2.x cluster is required for Knox sit in front of and protect. One of the easiest ways to ensure this it to utilize a Hortonworks Sandbox VM. It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here. It is also possible to use a limited set of services in Hadoop cluster secured with Kerberos. This too required additional configuration that is not described here. See the <a href="#Supported+Services">table provided</a> for details on what is supported for this release.</p><p>The Hadoop cluster should be ensured to have at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running. HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.</p><p>The instructions that follow assume a few things:</p>
 <ol>
-  <li>The gateway is <em>not</em> collocated with the Hadoop clusters themselves</li>
+  <li>The gateway is <em>not</em> collocated with the Hadoop clusters themselves.</li>
   <li>The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.</li>
-</ol><p>All of the instructions and samples provided here are tailored and tested to work &ldquo;out of the box&rdquo; against a <a href="http://hortonworks.com/products/hortonworks-sandbox">Hortonworks Sandbox 2.x VM</a>.</p><h3><a id="Download"></a>Download</h3><p>Download and extract the knox-{VERSION}.zip file into the installation directory. This directory will be referred to as your <code>{GATEWAY_HOME}</code>. You can find the downloads for Knox releases on the <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox">Apache mirrors</a>.</p>
+</ol><p>All of the instructions and samples provided here are tailored and tested to work &ldquo;out of the box&rdquo; against a <a href="http://hortonworks.com/products/hortonworks-sandbox">Hortonworks Sandbox 2.x VM</a>.</p><h3><a id="Download"></a>Download</h3><p>Download one of the distributions below from the <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox">Apache mirrors</a>.</p>
 <ul>
   <li>Source archive: <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox/0.3.0/knox-incubating-0.3.0-src.zip">knox-incubating-0.3.0-src.zip</a> (<a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-0.3.0-incubating-src.zip.asc">PGP signature</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0-src.zip.sha">SHA1 digest</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0-src.zip.md5">MD5 digest</a>)</li>
   <li>Binary archive: <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox/0.3.0/knox-incubating-0.3.0.zip">knox-incubating-0.3.0.zip</a> (<a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.asc">PGP signature</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.sha">SHA1 digest</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.md5">MD5 digest</a>)</li>
-</ul><p>Apache Knox Gateway releases are available under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>. See the NOTICE file contained in each release artifact for applicable copyright attribution notices.</p><h2>{{Verify}}</h2><p>It is essential that you verify the integrity of the downloaded files using the PGP signatures. Please read Verifying Apache HTTP Server Releases for more information on why you should verify our releases.</p><p>The PGP signatures can be verified using PGP or GPG. First download the KEYS file as well as the .asc signature files for the relevant release packages. Make sure you get these files from the main distribution directory, rather than from a mirror. Then verify the signatures using one of the methods below.</p>
+  <li>RPM package: <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm">knox-incubating-0.3.0.rpm</a> (<a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-0.3.0-incubating.rpm.asc">PGP signature</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm.sha">SHA1 digest</a>, <a href="http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm.md5">MD5 digest</a>)</li>
+</ul><p>Apache Knox Gateway releases are available under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>. See the NOTICE file contained in each release artifact for applicable copyright attribution notices.</p><h3><a id="Verify"></a>Verify</h3><p>It is essential that you verify the integrity of any downloaded files using the PGP signatures. Please read <a href="http://httpd.apache.org/dev/verification.html">Verifying Apache HTTP Server Releases</a> for more information on why you should verify our releases.</p><p>The PGP signatures can be verified using PGP or GPG. First download the KEYS file as well as the .asc signature files for the relevant release packages. Make sure you get these files from the main distribution directory linked above, rather than from a mirror. Then verify the signatures using one of the methods below.</p>
 <pre><code>% pgpk -a KEYS
 % pgpv knox-incubating-0.3.0.zip.asc
 </code></pre><p>or</p>
@@ -72,9 +80,79 @@
 </code></pre><p>or</p>
 <pre><code>% gpg --import KEYS
 % gpg --verify knox-incubating-0.3.0.zip.asc
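 # Optionally, also check the published SHA1 digest and compare it with the downloaded
 # .sha file (a sketch; some systems provide 'shasum -a 1' instead of 'sha1sum'):
 % sha1sum knox-incubating-0.3.0.zip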
-</code></pre><h3><a id="Install"></a>Install</h3><h4><a id="ZIP"></a>ZIP</h4><p>Download and extract the <code>knox-{VERSION}.zip</code> file into the installation directory that will contain your <code>{GATEWAY_HOME}</code>. You can find the downloads for Knox releases on the <a href="http://www.apache.org/dyn/closer.cgi/incubator/knox">Apache mirrors</a>.</p>
-<pre><code>jar xf knox-{VERSION}.zip
-</code></pre><p>This will create a directory <code>knox-{VERSION}</code> in your current directory.</p><h4><a id="RPM"></a>RPM</h4><p>TODO</p><h4><a id="Layout"></a>Layout</h4><p>TODO - Describe the purpose of all of the directories</p><h3><a id="Supported+Services"></a>Supported Services</h3><p>This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway. Only more recent versions of some Hadoop components when secured via Kerberos can be accessed via the Knox Gateway.</p>
+</code></pre><h3><a id="Install"></a>Install</h3><p>The steps required to install the gateway will vary depending upon which distribution format was downloaded. In either case you will end up with a directory where the gateway is installed. This directory will be referred to as your <code>{GATEWAY_HOME}</code> throughout this document.</p><h4><a id="ZIP"></a>ZIP</h4><p>If you downloaded the Zip distribution you can simply extract the contents into a directory. The example below provides a command that can be executed to do this. Note the <code>{VERSION}</code> portion of the command must be replaced with an actual Apache Knox Gateway version number. This might be 0.3.0 for example and must patch the value in the file downloaded.</p>
+<pre><code>jar xf knox-incubating-{VERSION}.zip
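+# If the JDK's jar tool is not on your PATH, unzip extracts the same archive
+# (assuming the unzip utility is installed):
+unzip knox-incubating-{VERSION}.zip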
+</code></pre><p>This will create a directory <code>knox-incubating-{VERSION}</code> in your current directory. The directory <code>knox-incubating-{VERSION}</code> will be considered your <code>{GATEWAY_HOME}</code>.</p><h4><a id="RPM"></a>RPM</h4><p>TODO</p><h4><a id="Layout"></a>Layout</h4><p>Regardless of the installation method used, the layout and content of the <code>{GATEWAY_HOME}</code> will be identical. The table below provides a brief explanation of the important files and directories within <code>{GATEWAY_HOME}</code>.</p>
+<table>
+  <thead>
+    <tr>
+      <th>Directory </th>
+      <th>Purpose </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>conf/ </td>
+      <td>Contains configuration files that apply to the gateway globally (i.e. not cluster specific). </td>
+    </tr>
+    <tr>
+      <td>bin/ </td>
+      <td>Contains the executable shell scripts, batch files and JARs for clients and servers. </td>
+    </tr>
+    <tr>
+      <td>deployments/ </td>
+      <td>Contains topology descriptors used to configure the gateway for specific Hadoop clusters. </td>
+    </tr>
+    <tr>
+      <td>lib/ </td>
+      <td>Contains the JARs for all the components that make up the gateway. </td>
+    </tr>
+    <tr>
+      <td>dep/ </td>
+      <td>Contains the JARs for all of the components upon which the gateway depends. </td>
+    </tr>
+    <tr>
+      <td>ext/ </td>
+      <td>A directory where user supplied extension JARs can be placed to extend the functionality of the gateway. </td>
+    </tr>
+    <tr>
+      <td>samples/ </td>
+      <td>Contains a number of samples that can be used to explore the functionality of the gateway. </td>
+    </tr>
+    <tr>
+      <td>templates/ </td>
+      <td>Contains default configuration files that can be copied and customized. </td>
+    </tr>
+    <tr>
+      <td>README </td>
+      <td>Provides basic information about the Apache Knox Gateway. </td>
+    </tr>
+    <tr>
+      <td>ISSUES </td>
+      <td>Describes significant known issues. </td>
+    </tr>
+    <tr>
+      <td>CHANGES </td>
+      <td>Enumerates the changes between releases. </td>
+    </tr>
+    <tr>
+      <td>INSTALL </td>
+      <td>Provides simple installation instructions. </td>
+    </tr>
+    <tr>
+      <td>LICENSE </td>
+      <td>Documents the license under which this software is provided. </td>
+    </tr>
+    <tr>
+      <td>NOTICE </td>
+      <td>Documents required attribution notices for included dependencies. </td>
+    </tr>
+    <tr>
+      <td>DISCLAIMER </td>
+      <td>Documents that this release is from a project undergoing incubation at Apache. </td>
+    </tr>
+  </tbody>
+</table><h3><a id="Supported+Services"></a>Supported Services</h3><p>This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway. Only more recent versions of some Hadoop components when secured via Kerberos can be accessed via the Knox Gateway.</p>
 <table>
   <thead>
     <tr>
@@ -89,19 +167,19 @@
       <td>WebHDFS </td>
       <td>2.1.0 </td>
       <td><img src="check.png"  alt="y"/> </td>
-      <td><img src="question.png"  alt="?"/><img src="check.png"  alt="y"/> </td>
+      <td><img src="check.png"  alt="y"/> </td>
     </tr>
     <tr>
       <td>WebHCat/Templeton </td>
       <td>0.11.0 </td>
       <td><img src="check.png"  alt="y"/> </td>
-      <td><img src="question.png"  alt="?"/><img src="error.png"  alt="n"/> </td>
+      <td><img src="check.png"  alt="y"/> </td>
     </tr>
     <tr>
       <td>Oozie </td>
       <td>4.0.0 </td>
       <td><img src="check.png"  alt="y"/> </td>
-      <td><img src="question.png"  alt="?"/> </td>
+      <td><img src="check.png"  alt="y"/> </td>
     </tr>
     <tr>
       <td>HBase/Stargate </td>
@@ -110,6 +188,18 @@
       <td><img src="question.png"  alt="?"/> </td>
     </tr>
     <tr>
+      <td>Hive/WebHCat </td>
+      <td>0.11.0 </td>
+      <td><img src="check.png"  alt="y"/> </td>
+      <td><img src="check.png"  alt="y"/> </td>
+    </tr>
+    <tr>
+      <td> </td>
+      <td>0.12.0 </td>
+      <td><img src="check.png"  alt="y"/> </td>
+      <td><img src="check.png"  alt="y"/> </td>
+    </tr>
+    <tr>
       <td>Hive/JDBC </td>
       <td>0.11.0 </td>
       <td><img src="error.png"  alt="n"/> </td>
@@ -128,16 +218,16 @@
       <td><img src="question.png"  alt="?"/> </td>
     </tr>
   </tbody>
-</table><p>ProxyUser feature of WebHDFS, WebHCat and Oozie required for secure cluster support seem to work fine. Knox code seems to be broken for support of secure cluster at this time for WebHDFS, WebHCat and Oozie.</p><h3><a id="Basic+Usage"></a>Basic Usage</h3><h4><a id="Starting+Servers"></a>Starting Servers</h4><h5><a id="1.+Enter+the+`{GATEWAY_HOME}`+directory"></a>1. Enter the <code>{GATEWAY_HOME}</code> directory</h5>
-<pre><code>cd knox-{VERSION}
-</code></pre><p>The fully qualified name of this directory will be referenced as `{GATEWAY_HOME}}} throughout the remainder of this document.</p><h5><a id="2.+Start+the+demo+LDAP+server+(ApacheDS)"></a>2. Start the demo LDAP server (ApacheDS)</h5><p>First, understand that the LDAP server provided here is for demonstration purposes. You may configure the LDAP specifics within the topology descriptor for the cluster as described in step 5 below, in order to customize what LDAP instance to use. The assumption is that most users will leverage the demo LDAP server while evaluating this release and should therefore continue with the instructions here in step 3.</p><p>Edit <code>{GATEWAY_HOME}/conf/users.ldif</code> if required and add your users and groups to the file. A sample end user &ldquo;bob&rdquo; has been already included. Note that the passwords in this file are &ldquo;fictitious&rdquo; and have nothing to do with the actual accounts on the Hadoop cluster you are using. There is 
 also a copy of this file in the templates directory that you can use to start over if necessary.</p><p>Start the LDAP server - pointing it to the config dir where it will find the users.ldif file in the conf directory.</p>
+</table><h3><a id="Basic+Usage"></a>Basic Usage</h3><p>The steps described below are intended to get the Knox Gateway server up and running in its default configuration. Once that is accomplished a very simple example of using the gateway to interact with a Hadoop cluster is provided. More detailed configuration information is provided in the <a href="#Gateway+Details">Gateway Details</a> section. More detailed examples for using each Hadoop service can be found in the <a href="#Service+Details">Service Details</a> section.</p><p>Note that *nix conventions are used throughout this section but in general the Windows alternative should be obvious. In situations where this is not the case a Windows alternative will be provided.</p><h4><a id="Starting+Servers"></a>Starting Servers</h4><h5><a id="1.+Enter+the+`{GATEWAY_HOME}`+directory"></a>1. Enter the <code>{GATEWAY_HOME}</code> directory</h5>
+<pre><code>cd knox-incubating-{VERSION}
+</code></pre><p>The fully qualified name of this directory will be referenced as <code>{GATEWAY_HOME}</code> throughout this document.</p><h5><a id="2.+Start+the+demo+LDAP+server+(ApacheDS)"></a>2. Start the demo LDAP server (ApacheDS)</h5><p>First, understand that the LDAP server provided here is for demonstration purposes. You may configure the gateway to utilize other LDAP systems via the topology descriptor. This is described in step 5 below. The assumption is that most users will leverage the demo LDAP server while evaluating this release and should therefore continue with the instructions here in step 3.</p><p>Edit <code>{GATEWAY_HOME}/conf/users.ldif</code> if required and add any desired users and groups to the file. A sample end user &ldquo;guest&rdquo; has been already included. Note that the passwords in this file are &ldquo;fictitious&rdquo; and have nothing to do with the actual accounts on the Hadoop cluster you are using. There is also a copy of this file in the templ
 ates directory that you can use to start over if necessary. This file is only used by the demo LDAP server.</p><p>Start the LDAP server - pointing it to the config dir where it will find the <code>users.ldif</code> file in the conf directory.</p>
 <pre><code>java -jar bin/ldap.jar conf &amp;
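 # the demo LDAP server listens on port 33389 by default; note the port reported in
 # its log output, as the topology descriptor must reference the same port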
-</code></pre><p>There are a number of log messages of the form {{Created null.` that can safely be ignored. Take note of the port on which it was started as this needs to match later configuration.</p><h5><a id="3.+Start+the+gateway+server"></a>3. Start the gateway server</h5>
+</code></pre><p>On Windows this command can be run in its own command window instead of running it in the background via <code>&amp;</code>.</p><p>There are a number of log messages of the form <code>Created null.</code> that can safely be ignored. Take note of the port on which it was started as this needs to match later configuration.</p><h5><a id="3.+Start+the+gateway+server"></a>3. Start the gateway server</h5>
 <pre><code>java -jar bin/server.jar
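 # a sketch of an optional variant: the -persist-master switch described under
 # 'Persisting the Master Secret' below stores the master secret so it need not be
 # re-entered on every start
 java -jar bin/server.jar -persist-master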
-</code></pre><p>Take note of the port identified in the logging output as you will need this for accessing the gateway.</p><p>The server will prompt you for the master secret (password). This secret is used to secure artifacts used to secure artifacts used by the gateway server for things like SSL, credential/password aliasing. This secret will have to be entered at startup unless you choose to persist it. Remember this secret and keep it safe. It represents the keys to the kingdom. See the Persisting the Master section for more information.</p><h5><a id="4.+Configure+the+Gateway+with+the+topology+of+your+Hadoop+cluster"></a>4. Configure the Gateway with the topology of your Hadoop cluster</h5><p>Edit the file <code>{GATEWAY_HOME}/deployments/sandbox.xml</code></p><p>Change the host and port in the urls of the <code>&lt;service&gt;</code> elements for WEBHDFS, WEBHCAT, OOZIE, WEBHBASE and HIVE services to match your Hadoop cluster deployment.</p><p>The default configuration contains
  the LDAP URL for a LDAP server. By default that file is configured to access the demo ApacheDS based LDAP server and its default configuration. By default, this server listens on port 33389. Optionally, you can change the LDAP URL for the LDAP server to be used for authentication. This is set via the main.ldapRealm.contextFactory.url property in the <code>&lt;gateway&gt;&lt;provider&gt;&lt;authentication&gt;</code> section.</p><p>Save the file. The directory <code>{GATEWAY_HOME}/deployments</code> is monitored by the gateway server. When a new or changed cluster topology descriptor is detected, it will provision the endpoints for the services described in the topology descriptor. Note that the name of the file excluding the extension is also used as the path for that cluster in the URL. For example the <code>sandbox.xml</code> file will result in gateway URLs of the form <code>http://{gateway-host}:{gateway-port}/gateway/sandbox/webhdfs</code>.</p><h5><a id="5.+Test+the+installatio
 n+and+configuration+of+your+Gateway"></a>5. Test the installation and configuration of your Gateway</h5><p>Invoke the LISTSATUS operation on HDFS represented by your configured NAMENODE by using your web browser or curl:</p>
+</code></pre><p>Take note of the port identified in the logging output as you will need this for accessing the gateway.</p><p>The server will prompt you for the master secret (i.e. password). This secret is used to secure artifacts used by the gateway server for things like SSL and credential/password aliasing. This secret will have to be entered at startup unless you choose to persist it. See the Persisting the Master section for more information. Remember this secret and keep it safe. It represents the keys to the kingdom.</p><h5><a id="4.+Configure+the+Gateway+with+the+topology+of+your+Hadoop+cluster"></a>4. Configure the Gateway with the topology of your Hadoop cluster</h5><p>Edit the file <code>{GATEWAY_HOME}/deployments/sandbox.xml</code></p><p>Change the host and port in the urls of the <code>&lt;service&gt;</code> elements for WEBHDFS, WEBHCAT, OOZIE, WEBHBASE and HIVE services to match your Hadoop cluster deployment.</p><p>The default configuration contains the LDAP URL for
  a LDAP server. By default that file is configured to access the demo ApacheDS based LDAP server and its default configuration. The ApacheDS based LDAP server listens on port 33389 by default. Optionally, you can change the LDAP URL for the LDAP server to be used for authentication. This is set via the <code>main.ldapRealm.contextFactory.url</code> property in the <code>&lt;gateway&gt;&lt;provider&gt;&lt;authentication&gt;</code> section. If you use an LDAP system other than the demo LDAP server you may need to change additional configuration as well.</p><p>Save the file. The directory <code>{GATEWAY_HOME}/deployments</code> is monitored by the gateway server. When a new or changed cluster topology descriptor is detected, it will provision the endpoints for the services described in the topology descriptor. Note that the name of the file excluding the extension is also used as the path for that cluster in the URL. For example the <code>sandbox.xml</code> file will result in gateway 
 URLs of the form <code>http://{gateway-host}:{gateway-port}/gateway/sandbox/webhdfs</code>.</p><h5><a id="5.+Test+the+installation"></a>5. Test the installation</h5><p>Invoke the LISTSTATUS operation on WebHDFS via the gateway. This will return a directory listing of the root (i.e. /) directory of HDFS.</p>
 <pre><code>curl -i -k -u bob:bob-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
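 # -i prints the HTTP response headers, -k tells curl not to validate the gateway's
 # SSL certificate, and -u supplies credentials for a user known to the demo LDAP server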
-</code></pre><p>The results of the above command should result in something to along the lines of the output below. The exact information returned is subject to the content within HDFS in your Hadoop cluster.</p>
+</code></pre><p>The above command should produce output along the lines of the example below. The exact information returned is subject to the content within HDFS in your Hadoop cluster. Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS. It does not necessarily prove that any of the other services are correctly configured to be accessible. To validate that, see the sections for the individual services in <a href="#Service+Details">Service Details</a>.</p>
 <pre><code>HTTP/1.1 200 OK
 Content-Type: application/json
 Content-Length: 760
@@ -149,7 +239,7 @@ Server: Jetty(6.1.26)
 {&quot;accessTime&quot;:0,&quot;blockSize&quot;:0,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:0,&quot;modificationTime&quot;:1350596040075,&quot;owner&quot;:&quot;hdfs&quot;,&quot;pathSuffix&quot;:&quot;tmp&quot;,&quot;permission&quot;:&quot;777&quot;,&quot;replication&quot;:0,&quot;type&quot;:&quot;DIRECTORY&quot;},
 {&quot;accessTime&quot;:0,&quot;blockSize&quot;:0,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:0,&quot;modificationTime&quot;:1350595857178,&quot;owner&quot;:&quot;hdfs&quot;,&quot;pathSuffix&quot;:&quot;user&quot;,&quot;permission&quot;:&quot;755&quot;,&quot;replication&quot;:0,&quot;type&quot;:&quot;DIRECTORY&quot;}
 ]}}
-</code></pre><p>For additional information on WebHDFS, Templeton/WebHCat and Oozie REST APIs, see the following URLs respectively:</p>
+</code></pre><p>For additional information on WebHDFS, WebHCat/Templeton, Oozie and HBase/Stargate REST APIs, see the following URLs respectively:</p>
 <ul>
   <li>WebHDFS - <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html">http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html</a></li>
   <li>WebHCat (Templeton) - <a href="http://people.apache.org/~thejas/templeton_doc_v1">http://people.apache.org/~thejas/templeton_doc_v1</a></li>
@@ -162,14 +252,7 @@ Server: Jetty(6.1.26)
   <li><a href="#Oozie+Examples">Oozie</a></li>
   <li><a href="#HBase+Examples">HBase</a></li>
   <li><a href="#Hive+Examples">Hive</a></li>
-</ul><h2>{{Sandbox Configuration}}</h2><p>This version of the Apache Knox Gateway is tested against [Hortonworks Sandbox 2.x|sandbox]</p><p>Currently there is an issue with Sandbox that prevents it from being easily used with the gateway. In order to correct the issue, you can use the commands below to login to the Sandbox VM and modify the configuration. This assumes that the name sandbox is setup to resolve to the Sandbox VM. It may be necessary to use the IP address of the Sandbox VM instead. <em>This is frequently but not always <code>192.168.56.101</code>.</em></p>
-<pre><code>ssh root@sandbox
-cp /usr/lib/hadoop/conf/hdfs-site.xml /usr/lib/hadoop/conf/hdfs-site.xml.orig
-sed -e s/localhost/sandbox/ /usr/lib/hadoop/conf/hdfs-site.xml.orig &gt; /usr/lib/hadoop/conf/hdfs-site.xml
-shutdown -r now
-</code></pre><p>In addition to make it very easy to follow along with the samples for the gateway you can configure your local system to resolve the address of the Sandbox by the names <code>vm</code> and <code>sandbox</code>. The IP address that is shown below should be that of the Sandbox VM as it is known on your system. <em>This will likely, but not always, be <code>192.168.56.101</code>.</em></p><p>On Linux or Macintosh systems add a line like this to the end of the file <code>/etc/hosts</code> on your local machine, <em>not the Sandbox VM</em>. <em>Note: The character between the 192.168.56.101 and vm below is a <em>tab</em> character.</em></p>
-<pre><code>192.168.56.101  vm sandbox
-</code></pre><p>On Windows systems a similar but different mechanism can be used. On recent versions of windows the file that should be modified is <code>%systemroot%\system32\drivers\etc\hosts</code></p><h2>{{Gateway Details}}</h2><p>TODO</p><h3><a id="Mapping+Gateway+URLs+to+Hadoop+cluster+URLs"></a>Mapping Gateway URLs to Hadoop cluster URLs</h3><p>The Gateway functions much like a reverse proxy. As such it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster. Examples of mappings for the WebHDFS, WebHCat, Oozie and Stargate/Hive are shown below. These mapping are generated from the combination of the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology descriptors (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p>
+</ul><h2><a id="Gateway+Details"></a>Gateway Details</h2><p>TODO</p><h3><a id="Mapping+Gateway+URLs+to+Hadoop+cluster+URLs"></a>Mapping Gateway URLs to Hadoop cluster URLs</h3><p>The Gateway functions much like a reverse proxy. As such it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster. Examples of mappings for the WebHDFS, WebHCat, Oozie and Stargate/Hive are shown below. These mapping are generated from the combination of the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology descriptors (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p>
 <ul>
   <li>WebHDFS
   <ul>
@@ -191,7 +274,7 @@ shutdown -r now
     <li>Gateway: <code>https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase</code></li>
     <li>Cluster: <code>http://{hbase-host}:60080</code></li>
   </ul></li>
-</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the Gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value for <code>{webhdfs-host}</code> and <code>{webhcat-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000 and 60080 are the defaults for WebHDFS, WebHCat, Oozie and Stargate/HBase respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h2>{{Configuration}}</h2><h3><a id="Host+Mapping"></a>Host Mapping</h3><p>TODO</p><p>That really depends upon how you have your VM configured. If you can hit <a h
 ref="http://c6401.ambari.apache.org:1022/">http://c6401.ambari.apache.org:1022/</a> directly from your client and knox host then you probably don&rsquo;t need the hostmap at all. The host map only exists for situations where a host in the hadoop cluster is known by one name externally and another internally. For example running hostname -q on sandbox returns sandbox.hortonworks.com but externally Sandbox is setup to be accesses using localhost via portmapping. The way the hostmap config works is that the <name/> element is what the hadoop cluster host is known as externally and the <value/> is how the hadoop cluster host identifies itself internally. <param><name>localhost</name><value>c6401,c6401.ambari.apache.org</value></param> You SHOULD be able to simply change <enabled>true</enabled> to false but I have a suspicion that that might not actually work. Please try it and file a jira if that doesn&rsquo;t work. If so, simply either remove the full provider config for hostmap or rem
 ove the <param/> that defines the mapping.</p><h3><a id="Logging"></a>Logging</h3><p>If necessary you can enable additional logging by editing the <code>log4j.properties</code> file in the <code>conf</code> directory. Changing the rootLogger value from <code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug logging. A number of useful, more fine loggers are also provided in the file.</p><h3><a id="Java+VM+Options"></a>Java VM Options</h3><p>TODO</p><h3><a id="Persisting+the+Master+Secret"></a>Persisting the Master Secret</h3><p>The master secret is required to start the server. This secret is used to access secured artifacts by the gateway instance. Keystore, trust stores and credential stores are all protected with the master secret.</p><p>You may persist the master secret by supplying the <em>-persist-master</em> switch at startup. This will result in a warning indicating that persisting the secret is less secure than providing it at startup. We do make some
  provisions in order to protect the persisted password.</p><p>It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by the user that the gateway is running as.</p><p>After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your environment. This is probably the most important layer of defense for master secret. Do not assume that the encryption if sufficient protection.</p><p>A specific user should be created to run the gateway this will protect a persisted master file.</p><h3><a id="Management+of+Security+Artifacts"></a>Management of Security Artifacts</h3><p>There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of the gateway instances or generated and populated by the gateway instance itself.
 </p><p>The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway instances and instances as part of a cluster of gateways in mind.</p><p>Upon start of the gateway server we:</p>
+</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the Gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value for <code>{webhdfs-host}</code> and <code>{webhcat-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000 and 60080 are the defaults for WebHDFS, WebHCat, Oozie and Stargate/HBase respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h3><a id="Configuration"></a>Configuration</h3><h4><a id="Host+Mapping"></a>Host Mapping</h4><p>TODO - Complete Host Mapping docs.</p><p>That really depends upon 
 how you have your VM configured. If you can hit <a href="http://c6401.ambari.apache.org:1022/">http://c6401.ambari.apache.org:1022/</a> directly from your client and knox host then you probably don&rsquo;t need the hostmap at all. The host map only exists for situations where a host in the hadoop cluster is known by one name externally and another internally. For example running hostname -q on sandbox returns sandbox.hortonworks.com but externally Sandbox is set up to be accessed using localhost via port mapping. The way the hostmap config works is that the <name/> element is what the hadoop cluster host is known as externally and the <value/> is how the hadoop cluster host identifies itself internally. <param><name>localhost</name><value>c6401,c6401.ambari.apache.org</value></param> You SHOULD be able to simply change <enabled>true</enabled> to false but I have a suspicion that that might not actually work. Please try it and file a jira if that doesn&rsquo;t work. If so, simply eithe
 r remove the full provider config for hostmap or remove the <param/> that defines the mapping.</p><h4><a id="Logging"></a>Logging</h4><p>If necessary you can enable additional logging by editing the <code>log4j.properties</code> file in the <code>conf</code> directory. Changing the rootLogger value from <code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug logging. A number of useful, more fine loggers are also provided in the file.</p><h4><a id="Java+VM+Options"></a>Java VM Options</h4><p>TODO - Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret"></a>Persisting the Master Secret</h4><p>The master secret is required to start the server. This secret is used to access secured artifacts by the gateway instance. Keystore, trust stores and credential stores are all protected with the master secret.</p><p>You may persist the master secret by supplying the <em>-persist-master</em> switch at startup. This will result in a warning indicating that persist
 ing the secret is less secure than providing it at startup. We do make some provisions in order to protect the persisted password.</p><p>It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by the user that the gateway is running as.</p><p>After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your environment. This is probably the most important layer of defense for the master secret. Do not assume that the encryption is sufficient protection.</p><p>A specific user should be created to run the gateway; this will protect a persisted master file.</p><h4><a id="Management+of+Security+Artifacts"></a>Management of Security Artifacts</h4><p>There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of the g
 ateway instances or generated and populated by the gateway instance itself.</p><p>The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway instances and instances as part of a cluster of gateways in mind.</p><p>Upon start of the gateway server we:</p>
 <ol>
   <li>Look for an identity store at <code>conf/security/keystores/gateway.jks</code>.  The identity store contains the certificate and private key used to represent the identity of the server for SSL connections and signature creation.
   <ul>
@@ -300,7 +383,7 @@ shutdown -r now
 </code></pre><p>The above configuration enables the authorization provider but does not indicate any ACLs yet and therefore there is no restriction to accessing the Hadoop services. In order to indicate the resources to be protected and the specific users, groups or ip&rsquo;s to grant access, we need to provide parameters like the following:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;username[,*|username…];group[,*|group…];ipaddr[,*|ipaddr…]&lt;/value&gt;
+    &lt;value&gt;username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]&lt;/value&gt;
 &lt;/param&gt;
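 &lt;!-- A hypothetical concrete example (service name and values are illustrative only):
      restrict WebHDFS to the users hdfs or bob, the admin group, and the listed
      client addresses, as discussed below. --&gt;
 &lt;param&gt;
     &lt;name&gt;webhdfs.acl&lt;/name&gt;
     &lt;value&gt;hdfs,bob;admin;127.0.0.2,127.0.0.3&lt;/value&gt;
 &lt;/param&gt;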
 </code></pre><p>where <code>{serviceName}</code> would need to be the name of a configured Hadoop service within the topology. Note that the configuration without any ACLs defined is equivalent to:</p>
 <pre><code>&lt;param&gt;
@@ -337,10 +420,10 @@ shutdown -r now
   <li>the user is &ldquo;hdfs&rdquo; or &ldquo;bob&rdquo; OR</li>
   <li>the user is in &ldquo;admin&rdquo; group OR</li>
   <li>the request is coming from 127.0.0.2 or 127.0.0.3</li>
-</ol><h4><a id="Other+Related+Configuration"></a>Other Related Configuration</h4><p>The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.</p><p>This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal. If there is no mapping to another principal then the authenticated or primary principal is then the effective principal. Principal mapping has actually been available in the identity assertion provider from the beginning of Knox. Although hasn’t been adequately documented as of yet.</p>
+</ol><h4><a id="Other+Related+Configuration"></a>Other Related Configuration</h4><p>The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.</p><p>This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal. If there is no mapping to another principal then the authenticated or primary principal is then the effective principal. Principal mapping has actually been available in the identity assertion provider from the beginning of Knox. Although hasn&rsquo;t been adequately documented as of yet.</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;principal.mapping&lt;/name&gt;
-    &lt;value&gt;{primaryPrincipal}[,…]={impersonatedPrincipal}[;…]&lt;/value&gt;
+    &lt;value&gt;{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><p>For instance:</p>
 <pre><code>&lt;param&gt;
@@ -369,7 +452,7 @@ shutdown -r now
 </code></pre><p>this configuration defines a principal mapping between an incoming identity of &ldquo;bob&rdquo; to an impersonated principal of &ldquo;hdfs&rdquo;. For an authenticated request from bob the effective principal ends up being &ldquo;hdfs&rdquo;.</p><p>In addition, we allow the administrator to map groups to effective principals. This is done through another param within the identity assertion provider:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;group.principal.mapping&lt;/name&gt;
-    &lt;value&gt;{userName[,*|userName…]}={groupName[,groupName…]}[,…]&lt;/value&gt;
+    &lt;value&gt;{userName[,*|userName...]}={groupName[,groupName...]}[,...]&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><p>For instance:</p>
 <pre><code>&lt;param&gt;
@@ -471,13 +554,13 @@ shutdown -r now
         &lt;url&gt;http://localhost:10000/&lt;/url&gt;
     &lt;/service&gt;
 &lt;/topology&gt;
-</code></pre><h2>{{Secure Clusters}}</h2><p>If your Hadoop cluster is secured with Kerberos authentication, you have to do the following on Knox side.</p><h3><a id="Secure+the+Hadoop+Cluster"></a>Secure the Hadoop Cluster</h3><p>Please secure Hadoop services with Keberos authentication.</p><p>Please see instructions at [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuration_in_Secure_Mode] and [http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.1/bk_installing_manually_book/content/rpm-chap14.html]</p><h3><a id="Create+Unix+account+for+Knox+on+Hadoop+master+nodes"></a>Create Unix account for Knox on Hadoop master nodes</h3>
-<pre><code>useradd \-g hadoop knox
-</code></pre><h3><a id="Create+Kerberos+principal,+keytab+for+Knox"></a>Create Kerberos principal, keytab for Knox</h3><p>One way of doing this, assuming your KDC realm is EXAMPLE.COM</p><p>ssh into your host running KDC</p>
+</code></pre><h3><a id="Secure+Clusters"></a>Secure Clusters</h3><p>If your Hadoop cluster is secured with Kerberos authentication, you have to do the following on Knox side.</p><p>Please secure Hadoop services with Keberos authentication.</p><p>Please see instructions at [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuration_in_Secure_Mode] and [http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.1/bk_installing_manually_book/content/rpm-chap14.html]</p><h4><a id="Create+Unix+account+for+Knox+on+Hadoop+master+nodes"></a>Create Unix account for Knox on Hadoop master nodes</h4>
+<pre><code>useradd -g hadoop knox
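+# assumes a 'hadoop' group already exists on the master nodes; if it does not,
+# create it first with: groupadd hadoop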
+</code></pre><h4><a id="Create+Kerberos+principal,+keytab+for+Knox"></a>Create Kerberos principal, keytab for Knox</h4><p>One way of doing this, assuming your KDC realm is EXAMPLE.COM</p><p>ssh into your host running KDC</p>
 <pre><code>kadmin.local
 add_principal -randkey knox/knox@EXAMPLE.COM
 ktadd -norandkey -k /etc/security/keytabs/knox.service.keytab knox/knox@EXAMPLE.COM
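 # EXAMPLE.COM above is a placeholder; substitute the realm of your own KDC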
-</code></pre><h3><a id="Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes"></a>Grant Proxy privileges for Knox user in <code>core-site.xml</code> on Hadoop master nodes</h3><p>Update <code>core-site.xml</code> and add the following lines towards the end of the file.</p><p>Please replace FQDN_OF_KNOX_HOST with right value in your cluster. You could use * for local developer testing if Knox host does not have static IP.</p>
+</code></pre><h4><a id="Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes"></a>Grant Proxy privileges for Knox user in <code>core-site.xml</code> on Hadoop master nodes</h4><p>Update <code>core-site.xml</code> and add the following lines towards the end of the file.</p><p>Please replace FQDN_OF_KNOX_HOST with right value in your cluster. You could use * for local developer testing if Knox host does not have static IP.</p>
 <pre><code>&lt;property&gt;
     &lt;name&gt;hadoop.proxyuser.knox.groups&lt;/name&gt;
     &lt;value&gt;users&lt;/value&gt;
@@ -486,7 +569,7 @@ ktadd -norandkey -k /etc/security/keytab
     &lt;name&gt;hadoop.proxyuser.knox.hosts&lt;/name&gt;
     &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
 &lt;/property&gt;
-</code></pre><h3><a id="Grant+proxy+privilege+for+Knox+in+`oozie-stie.xml`+on+Oozie+host"></a>Grant proxy privilege for Knox in <code>oozie-stie.xml</code> on Oozie host</h3><p>Update <code>oozie-site.xml</code> and add the following lines towards the end of the file.</p><p>Please replace FQDN_OF_KNOX_HOST with right value in your cluster. You could use * for local developer testing if Knox host does not have static IP.</p>
+</code></pre><h4><a id="Grant+proxy+privilege+for+Knox+in+`oozie-stie.xml`+on+Oozie+host"></a>Grant proxy privilege for Knox in <code>oozie-stie.xml</code> on Oozie host</h4><p>Update <code>oozie-site.xml</code> and add the following lines towards the end of the file.</p><p>Please replace FQDN_OF_KNOX_HOST with right value in your cluster. You could use * for local developer testing if Knox host does not have static IP.</p>
 <pre><code>&lt;property&gt;
    &lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.groups&lt;/name&gt;
    &lt;value&gt;users&lt;/value&gt;
@@ -495,12 +578,12 @@ ktadd -norandkey -k /etc/security/keytab
    &lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.hosts&lt;/name&gt;
    &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
 &lt;/property&gt;
-</code></pre><h3><a id="Copy+knox+keytab+to+Knox+host"></a>Copy knox keytab to Knox host</h3><p>Please add unix account for knox on Knox host</p>
+</code></pre><h4><a id="Copy+knox+keytab+to+Knox+host"></a>Copy knox keytab to Knox host</h4><p>Please add unix account for knox on Knox host</p>
 <pre><code>useradd -g hadoop knox
 </code></pre><p>Please copy the knox.service.keytab created on the KDC host to /etc/knox/conf/knox.service.keytab on your Knox host.</p>
 <pre><code>chown knox knox.service.keytab
 chmod 400 knox.service.keytab
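 # run as root on the Knox host in the directory holding the copied keytab (e.g.
 # /etc/knox/conf as described above); mode 400 leaves the keytab readable only by knox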
-</code></pre><h3><a id="Update+krb5.conf+at+/etc/knox/conf/krb5.conf+on+Knox+host"></a>Update krb5.conf at /etc/knox/conf/krb5.conf on Knox host</h3><p>You could copy the <code>templates/krb5.conf</code> file provided in the Knox binary download and customize it to suit your cluster.</p><h3><a id="Update+`krb5JAASLogin.conf`+at+`/etc/knox/conf/krb5JAASLogin.conf`+on+Knox+host"></a>Update <code>krb5JAASLogin.conf</code> at <code>/etc/knox/conf/krb5JAASLogin.conf</code> on Knox host</h3><p>You could copy the <code>templates/krb5JAASLogin.conf</code> file provided in the Knox binary download and customize it to suit your cluster.</p><h3><a id="Update+`gateway-site.xml`+on+Knox+host+on+Knox+host"></a>Update <code>gateway-site.xml</code> on Knox host on Knox host</h3><p>Update <code>conf/gateway-site.xml</code> in your Knox installation and set the value of <code>gateway.hadoop.kerberos.secured</code> to true.</p><h3><a id="Restart+Knox"></a>Restart Knox</h3><p>After you do the above con
 figurations and restart Knox, Knox would use SPNego to authenticate with Hadoop services and Oozie. There is not change in the way you make calls to Knox whether you use Curl or Knox DSL.</p><h2>{{Client Details}}</h2><p>Hadoop requires a client that can be used to interact remotely with the services provided by Hadoop cluster. This will also be true when using the Apache Knox Gateway to provide perimeter security and centralized access for these services. The two primary existing clients for Hadoop are the CLI (i.e. Command Line Interface, hadoop) and HUE (i.e. Hadoop User Environment). For several reasons however, neither of these clients can <em>currently</em> be used to access Hadoop services via the Apache Knox Gateway.</p><p>This led to thinking about a very simple client that could help people use and evaluate the gateway. The list below outlines the general requirements for such a client.</p>
+</code></pre><h4><a id="Update+krb5.conf+at+/etc/knox/conf/krb5.conf+on+Knox+host"></a>Update krb5.conf at /etc/knox/conf/krb5.conf on Knox host</h4><p>You could copy the <code>templates/krb5.conf</code> file provided in the Knox binary download and customize it to suit your cluster.</p><h4><a id="Update+`krb5JAASLogin.conf`+at+`/etc/knox/conf/krb5JAASLogin.conf`+on+Knox+host"></a>Update <code>krb5JAASLogin.conf</code> at <code>/etc/knox/conf/krb5JAASLogin.conf</code> on Knox host</h4><p>You could copy the <code>templates/krb5JAASLogin.conf</code> file provided in the Knox binary download and customize it to suit your cluster.</p><h4><a id="Update+`gateway-site.xml`+on+Knox+host+on+Knox+host"></a>Update <code>gateway-site.xml</code> on Knox host on Knox host</h4><p>Update <code>conf/gateway-site.xml</code> in your Knox installation and set the value of <code>gateway.hadoop.kerberos.secured</code> to true.</p><h4><a id="Restart+Knox"></a>Restart Knox</h4><p>After you do the above con
 figurations and restart Knox, Knox would use SPNego to authenticate with Hadoop services and Oozie. There is not change in the way you make calls to Knox whether you use Curl or Knox DSL.</p><h2><a id="Client+Details"></a>Client Details</h2><p>Hadoop requires a client that can be used to interact remotely with the services provided by Hadoop cluster. This will also be true when using the Apache Knox Gateway to provide perimeter security and centralized access for these services. The two primary existing clients for Hadoop are the CLI (i.e. Command Line Interface, hadoop) and HUE (i.e. Hadoop User Environment). For several reasons however, neither of these clients can <em>currently</em> be used to access Hadoop services via the Apache Knox Gateway.</p><p>This led to thinking about a very simple client that could help people use and evaluate the gateway. The list below outlines the general requirements for such a client.</p>
 <ul>
   <li>Promote the evaluation and adoption of the Apache Knox Gateway</li>
   <li>Simple to deploy and use on data worker desktops to access to remote Hadoop clusters</li>
@@ -933,7 +1016,7 @@ dep/commons-codec-1.7.jar
     <li>JSON Path <a href="https://code.google.com/p/json-path/">API</a></li>
     <li>GPath <a href="http://groovy.codehaus.org/GPath">Overview</a></li>
   </ul></li>
-</ul><h2>{{Service Details}}</h2><p>TODO</p><h3><a id="WebHDFS"></a>WebHDFS</h3><p>TODO</p><h4><a id="WebHDFS+URL+Mapping"></a>WebHDFS URL Mapping</h4><p>TODO</p><h4><a id="WebHDFS+Examples"></a>WebHDFS Examples</h4><p>TODO</p><h4><a id="Assumptions"></a>Assumptions</h4><p>This document assumes a few things about your environment in order to simplify the examples.</p>
+</ul><h2><a id="Service+Details"></a>Service Details</h2><p>TODO - Service details overview</p><h3><a id="WebHDFS"></a>WebHDFS</h3><p>TODO</p><h4><a id="WebHDFS+URL+Mapping"></a>WebHDFS URL Mapping</h4><p>TODO</p><h4><a id="WebHDFS+Examples"></a>WebHDFS Examples</h4><p>TODO</p><h4><a id="Assumptions"></a>Assumptions</h4><p>This document assumes a few things about your environment in order to simplify the examples.</p>
 <ul>
   <li>The JVM is executable as simply java.</li>
   <li>The Apache Knox Gateway is installed and functional.</li>
@@ -1250,7 +1333,7 @@ curl -i -k -u bob:bob-password -X DELETE
   <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].</li>
 </ol><h3><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h3><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080</p>
 <pre><code>sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
-</code></pre><p>60080 post is used because it was specified in sample Hadoop cluster deployment {{{GATEWAY_HOME}}}/deployments/sample.xml.</p><h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox"></a>Configure Sandbox port mapping for VirtualBox</h4>
+</code></pre><p>Port 60080 is used because it was specified in the sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sample.xml</code>.</p><h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox"></a>Configure Sandbox port mapping for VirtualBox</h4>
 <ol>
   <li>Select the VM</li>
   <li>Select menu Machine&gt;Settings&hellip;</li>
@@ -1260,7 +1343,7 @@ curl -i -k -u bob:bob-password -X DELETE
   <li>Press Plus button to insert new rule: Name=Stargate, Host Port=60080, Guest Port=60080</li>
   <li>Press OK to close the rule window</li>
   <li>Press OK to Network window save the changes</li>
-</ol><p>60080 post is used because it was specified in sample Hadoop cluster deployment {{{GATEWAY_HOME}}}/deployments/sample.xml.</p><h3><a id="HBase/Stargate+via+KnoxShell+DSL"></a>HBase/Stargate via KnoxShell DSL</h3><h4><a id="Usage"></a>Usage</h4><p>For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].</p><h5><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h5>
+</ol><p>Port 60080 is used because it was specified in the sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sample.xml</code>.</p><h3><a id="HBase/Stargate+via+KnoxShell+DSL"></a>HBase/Stargate via KnoxShell DSL</h3><h4><a id="Usage"></a>Usage</h4><p>For more details about client DSL usage please follow this <a href="https://cwiki.apache.org/confluence/display/KNOX/Client+Usage">page</a>.</p><h5><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h5>
 <ul>
   <li>Request
   <ul>
@@ -1272,7 +1355,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).systemVersion().now().string}}</li>
+    <li><code>HBase.session(session).systemVersion().now().string</code></li>
   </ul></li>
 </ul><h5><a id="clusterVersion()+-+Query+Storage+Cluster+Version."></a>clusterVersion() - Query Storage Cluster Version.</h5>
 <ul>
@@ -1286,7 +1369,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).clusterVersion().now().string}}</li>
+    <li><code>HBase.session(session).clusterVersion().now().string</code></li>
   </ul></li>
 </ul><h5><a id="status()+-+Query+Storage+Cluster+Status."></a>status() - Query Storage Cluster Status.</h5>
 <ul>
@@ -1300,7 +1383,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).status().now().string}}</li>
+    <li><code>HBase.session(session).status().now().string</code></li>
   </ul></li>
 </ul><h5><a id="table().list()+-+Query+Table+List."></a>table().list() - Query Table List.</h5>
 <ul>
@@ -1313,7 +1396,7 @@ curl -i -k -u bob:bob-password -X DELETE
     <li>BasicResponse</li>
   </ul></li>
   <li>Example</li>
-  <li>{{HBase.session(session).table().list().now().string}}</li>
+  <li><code>HBase.session(session).table().list().now().string</code></li>
 </ul><h5><a id="table(String+tableName).schema()+-+Query+Table+Schema."></a>table(String tableName).schema() - Query Table Schema.</h5>
 <ul>
   <li>Request
@@ -1326,7 +1409,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table().schema().now().string}}</li>
+    <li><code>HBase.session(session).table().schema().now().string</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).create()+-+Create+Table+Schema."></a>table(String tableName).create() - Create Table Schema.</h5>
 <ul>
@@ -1343,7 +1426,18 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).create()}}  {{.attribute(&ldquo;tb_attr1&rdquo;, &ldquo;value1&rdquo;)}}  {{.attribute(&ldquo;tb_attr2&rdquo;, &ldquo;value2&rdquo;)}}  {{.family(&ldquo;family1&rdquo;)}}  {{.attribute(&ldquo;fm_attr1&rdquo;, &ldquo;value3&rdquo;)}}  {{.attribute(&ldquo;fm_attr2&rdquo;, &ldquo;value4&rdquo;)}}  {{.endFamilyDef()}}  {{.family(&ldquo;family2&rdquo;)}}  {{.family(&ldquo;family3&rdquo;)}}  {{.endFamilyDef()}}  {{.attribute(&ldquo;tb_attr3&rdquo;, &ldquo;value5&rdquo;)}}  {{.now()}}</li>
+    <li><code>HBase.session(session).table(tableName).create()
+   .attribute(&quot;tb_attr1&quot;, &quot;value1&quot;)
+   .attribute(&quot;tb_attr2&quot;, &quot;value2&quot;)
+   .family(&quot;family1&quot;)
+   .attribute(&quot;fm_attr1&quot;, &quot;value3&quot;)
+   .attribute(&quot;fm_attr2&quot;, &quot;value4&quot;)
+   .endFamilyDef()
+   .family(&quot;family2&quot;)
+   .family(&quot;family3&quot;)
+   .endFamilyDef()
+   .attribute(&quot;tb_attr3&quot;, &quot;value5&quot;)
+   .now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).update()+-+Update+Table+Schema."></a>table(String tableName).update() - Update Table Schema.</h5>
 <ul>
@@ -1359,7 +1453,14 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).update()}}  {{.family(&ldquo;family1&rdquo;)}}  {{.attribute(&ldquo;fm_attr1&rdquo;, &ldquo;new_value3&rdquo;)}}  {{.endFamilyDef()}}  {{.family(&ldquo;family4&rdquo;)}}  {{.attribute(&ldquo;fm_attr3&rdquo;, &ldquo;value6&rdquo;)}}  {{.endFamilyDef()}}  {{.now()}}</li>
+    <li><code>HBase.session(session).table(tableName).update()
+ .family(&quot;family1&quot;)
+     .attribute(&quot;fm_attr1&quot;, &quot;new_value3&quot;)
+ .endFamilyDef()
+ .family(&quot;family4&quot;)
+     .attribute(&quot;fm_attr3&quot;, &quot;value6&quot;)
+ .endFamilyDef()
+ .now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).regions()+-+Query+Table+Metadata."></a>table(String tableName).regions() - Query Table Metadata.</h5>
 <ul>
@@ -1373,7 +1474,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).regions().now().string}}</li>
+    <li><code>HBase.session(session).table(tableName).regions().now().string</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).delete()+-+Delete+Table."></a>table(String tableName).delete() - Delete Table.</h5>
 <ul>
@@ -1387,7 +1488,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).delete().now()}}</li>
+    <li><code>HBase.session(session).table(tableName).delete().now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).row(String+rowId).store()+-+Cell+Store."></a>table(String tableName).row(String rowId).store() - Cell Store.</h5>
 <ul>
@@ -1401,8 +1502,14 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).row(&ldquo;row_id_1&rdquo;).store()}}  {{.column(&ldquo;family1&rdquo;, &ldquo;col1&rdquo;, &ldquo;col_value1&rdquo;)}}  {{.column(&ldquo;family1&rdquo;, &ldquo;col2&rdquo;, &ldquo;col_value2&rdquo;, 1234567890l)}}  {{.column(&ldquo;family2&rdquo;, null, &ldquo;fam_value1&rdquo;)}}  {{.now()}}</li>
-    <li>{{HBase.session(session).table(tableName).row(&ldquo;row_id_2&rdquo;).store()}}  {{.column(&ldquo;family1&rdquo;, &ldquo;row2_col1&rdquo;, &ldquo;row2_col_value1&rdquo;)}}  {{.now()}}</li>
+    <li><code>HBase.session(session).table(tableName).row(&quot;row_id_1&quot;).store()
+ .column(&quot;family1&quot;, &quot;col1&quot;, &quot;col_value1&quot;)
+ .column(&quot;family1&quot;, &quot;col2&quot;, &quot;col_value2&quot;, 1234567890L)
+ .column(&quot;family2&quot;, null, &quot;fam_value1&quot;)
+ .now()</code></li>
+    <li><code>HBase.session(session).table(tableName).row(&quot;row_id_2&quot;).store()
+ .column(&quot;family1&quot;, &quot;row2_col1&quot;, &quot;row2_col_value1&quot;)
+ .now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).row(String+rowId).query()+-+Cell+or+Row+Query."></a>table(String tableName).row(String rowId).query() - Cell or Row Query.</h5>
 <ul>
@@ -1421,9 +1528,16 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).row(&ldquo;row_id_1&rdquo;)}}  {{.query()}}  {{.now().string}}</li>
-    <li>{{HBase.session(session).table(tableName).row().query().now().string}}</li>
-    <li>{{HBase.session(session).table(tableName).row().query()}}  {{.column(&ldquo;family1&rdquo;, &ldquo;row2_col1&rdquo;)}}  {{.column(&ldquo;family2&rdquo;)}}  {{.times(0, Long.MAX_VALUE)}}  {{.numVersions(1)}}  {{.now().string}}</li>
+    <li><code>HBase.session(session).table(tableName).row(&quot;row_id_1&quot;)
+ .query()
+ .now().string</code></li>
+    <li><code>HBase.session(session).table(tableName).row().query().now().string</code></li>
+    <li><code>HBase.session(session).table(tableName).row().query()
+ .column(&quot;family1&quot;, &quot;row2_col1&quot;)
+ .column(&quot;family2&quot;)
+ .times(0, Long.MAX_VALUE)
+ .numVersions(1)
+ .now().string</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).row(String+rowId).delete()+-+Row,+Column,+or+Cell+Delete."></a>table(String tableName).row(String rowId).delete() - Row, Column, or Cell Delete.</h5>
 <ul>
@@ -1438,8 +1552,15 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).row(&ldquo;row_id_1&rdquo;)}}  {{.delete()}}  {{.column(&ldquo;family1&rdquo;, &ldquo;col1&rdquo;)}}  {{.now()}}</li>
-    <li>{{HBase.session(session).table(tableName).row(&ldquo;row_id_1&rdquo;)}}  {{.delete()}}  {{.column(&ldquo;family2&rdquo;)}}  {{.time(Long.MAX_VALUE)}}  {{.now()}}</li>
+    <li><code>HBase.session(session).table(tableName).row(&quot;row_id_1&quot;)
+ .delete()
+ .column(&quot;family1&quot;, &quot;col1&quot;)
+ .now()</code></li>
+    <li><code>HBase.session(session).table(tableName).row(&quot;row_id_1&quot;)
+ .delete()
+ .column(&quot;family2&quot;)
+ .time(Long.MAX_VALUE)
+ .now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).scanner().create()+-+Scanner+Creation."></a>table(String tableName).scanner().create() - Scanner Creation.</h5>
 <ul>
@@ -1462,7 +1583,17 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).scanner().create()}}  {{.column(&ldquo;family1&rdquo;, &ldquo;col2&rdquo;)}}  {{.column(&ldquo;family2&rdquo;)}}  {{.startRow(&ldquo;row_id_1&rdquo;)}}  {{.endRow(&ldquo;row_id_2&rdquo;)}}  {{.batch(1)}}  {{.startTime(0)}}  {{.endTime(Long.MAX_VALUE)}}  {{.filter(&quot;&quot;)}}  {{.maxVersions(100)}}  {{.now()}}</li>
+    <li><code>HBase.session(session).table(tableName).scanner().create()
+ .column(&quot;family1&quot;, &quot;col2&quot;)
+ .column(&quot;family2&quot;)
+ .startRow(&quot;row_id_1&quot;)
+ .endRow(&quot;row_id_2&quot;)
+ .batch(1)
+ .startTime(0)
+ .endTime(Long.MAX_VALUE)
+ .filter(&quot;&quot;)
+ .maxVersions(100)
+ .now()</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).scanner(String+scannerId).getNext()+-+Scanner+Get+Next."></a>table(String tableName).scanner(String scannerId).getNext() - Scanner Get Next.</h5>
 <ul>
@@ -1476,7 +1607,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string}}</li>
+    <li><code>HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string</code></li>
   </ul></li>
 </ul><h5><a id="table(String+tableName).scanner(String+scannerId).delete()+-+Scanner+Deletion."></a>table(String tableName).scanner(String scannerId).delete() - Scanner Deletion.</h5>
 <ul>
@@ -1490,7 +1621,7 @@ curl -i -k -u bob:bob-password -X DELETE
   </ul></li>
   <li>Example
   <ul>
-    <li>{{HBase.session(session).table(tableName).scanner(scannerId).delete().now()}}</li>
+    <li><code>HBase.session(session).table(tableName).scanner(scannerId).delete().now()</code></li>
   </ul></li>
 </ul><h4><a id="Examples"></a>Examples</h4><p>This example illustrates sequence of all basic HBase operations: 1. get system version 2. get cluster version 3. get cluster status 4. create the table 5. get list of tables 6. get table schema 7. update table schema 8. insert single row into table 9. query row by id 10. query all rows 11. delete cell from row 12. delete entire column family from row 13. get table regions 14. create scanner 15. fetch values using scanner 16. drop scanner 17. drop the table</p><p>There are several ways to do this depending upon your preference.</p><p>You can use the Groovy interpreter provided with the distribution.</p>
 <pre><code>java -jar bin/shell.jar samples/ExampleHBaseUseCase.groovy
@@ -1919,25 +2050,25 @@ connection.close();
 2012-02-03 --- 18:35:34 --- SampleClass6 --- [TRACE]
 2012-02-03 --- 18:35:34 --- SampleClass2 --- [DEBUG]
 ...
-</code></pre><h2>{{Trouble Shooting}}</h2><h3><a id="Enabling+Logging"></a>Enabling Logging</h3><p>The <code>log4j.properties</code> files <code>&lt;GATEWAY_HOME&gt;/conf</code> can be used to change the granularity of the logging done by Knox. &nbsp;The Knox server must be restarted in order for these changes to take effect. There are various useful loggers pre-populated in that file but they are commented out.</p>
+</code></pre><h2><a id="Troubleshooting"></a>Troubleshooting</h2><h3><a id="Connection+Errors"></a>Connection Errors</h3><p>TODO - Explain how to debug connection errors.</p><h3><a id="Enabling+Logging"></a>Enabling Logging</h3><p>The <code>log4j.properties</code> file in <code>{GATEWAY_HOME}/conf</code> can be used to change the granularity of the logging done by Knox. The Knox server must be restarted in order for these changes to take effect. Several useful loggers are pre-populated in that file but commented out.</p>
 <pre><code>log4j.logger.org.apache.hadoop.gateway=DEBUG # Use this logger to increase the debugging of Apache Knox itself.
 log4j.logger.org.apache.shiro=DEBUG          # Use this logger to increase the debugging of Apache Shiro.
 log4j.logger.org.apache.http=DEBUG           # Use this logger to increase the debugging of Apache HTTP components.
 log4j.logger.org.apache.http.client=DEBUG    # Use this logger to increase the debugging of Apache HTTP client component.
 log4j.logger.org.apache.http.headers=DEBUG   # Use this logger to increase the debugging of Apache HTTP header.
 log4j.logger.org.apache.http.wire=DEBUG      # Use this logger to increase the debugging of Apache HTTP wire traffic.
-</code></pre><h3><a id="Filing+Bugs"></a>Filing Bugs</h3><p>h2. Filing bugs</p><p>Bugs can be filed using <a href="https://issues.apache.org/jira/browse/KNOX">Jira</a>. Please include the results of this command below in the Environment section. Also include the version of Hadoop being used.</p>
+</code></pre><h3><a id="Filing+Bugs"></a>Filing Bugs</h3><p>Bugs can be filed using <a href="https://issues.apache.org/jira/browse/KNOX">Jira</a>. Please include the output of the command below in the Environment section. Also include the version of Hadoop being used in the same section.</p>
 <pre><code>java -jar bin/server.jar -version
 </code></pre><h2><a id="Export+Controls"></a>Export Controls</h2><p>Apache Knox Gateway includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country&rsquo;s laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See <a href="http://www.wassenaar.org">http://www.wassenaar.org</a> for more information.</p><p>The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC
  Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.</p><p>The following provides more details on the included cryptographic software:</p>
 <ul>
   <li>Apache Knox Gateway uses the ApacheDS which in turn uses Bouncy Castle generic encryption libraries.</li>
   <li>See <a href="http://www.bouncycastle.org">http://www.bouncycastle.org</a> for more details on Bouncy Castle.</li>
   <li>See <a href="http://directory.apache.org/apacheds">http://directory.apache.org/apacheds</a> for more details on ApacheDS.</li>
-</ul><h2>Disclaimer</h2><p>The Apache Knox Gateway is an effort undergoing incubation at the Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.</p><p>Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects.</p><p>While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.</p><h2>Trademarks</h2><p>Apache Knox Gateway, Apache, the Apache feather logo and the Apache Knox Gateway project logos are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.</p><h2>License</h2><p>Apache Knox uses the standard <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache license</a>.</p><h2>Privacy Poli
 cy</h2><p>Apache Knox uses the standard Apache privacy policy.</p><p>Information about your use of this website is collected using server access logs and a tracking cookie. The collected information consists of the following:</p>
+</ul><h2><a id="Disclaimer"></a>Disclaimer</h2><p>The Apache Knox Gateway is an effort undergoing incubation at the Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.</p><p>Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects.</p><p>While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.</p><h2><a id="Trademarks"></a>Trademarks</h2><p>Apache Knox, Apache Knox Gateway, Apache, the Apache feather logo and the Apache Knox Gateway project logos are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.</p><h2><a id="License"></a>License</h2><p>Apache Knox uses the standard <a href="http
 ://www.apache.org/licenses/LICENSE-2.0">Apache license</a>.</p><h2><a id="Privacy+Policy"></a>Privacy Policy</h2><p>Apache Knox uses the standard Apache privacy policy.</p><p>Information about your use of this website is collected using server access logs and a tracking cookie. The collected information consists of the following:</p>
 <ul>
   <li>The IP address from which you access the website;</li>
   <li>The type of browser and operating system you use to access our site;</li>
   <li>The date and time you access our site;</li>
   <li>The pages you visit; and</li>
   <li>The addresses of pages from where you followed a link to our site.</li>
-</ul><p>Part of this information is gathered using a tracking cookie set by the <a href="http://www.google.com/analytics/">Google Analytics</a> service and handled by Google as described in their <a href="http://www.google.com/privacy.html">privacy policy</a>. See your browser documentation for instructions on how to disable the cookie if you prefer not to share this data with Google.</p><p>We use the gathered information to help us make our site more useful to visitors and to better understand how and when our site is used. We do not track or collect personally identifiable information or associate gathered data with any personally identifying information from other sources.</p><p>By using this website, you consent to the collection of this data in the manner and for the purpose described above.</p>
\ No newline at end of file
+</ul><p>Part of this information is gathered using a tracking cookie set by the <a href="http://www.google.com/analytics/">Google Analytics</a> service. Google&rsquo;s policy for the use of this information is described in their <a href="http://www.google.com/privacy.html">privacy policy</a>. See your browser&rsquo;s documentation for instructions on how to disable the cookie if you prefer not to share this data with Google.</p><p>We use the gathered information to help us make our site more useful to visitors and to better understand how and when our site is used. We do not track or collect personally identifiable information or associate gathered data with any personally identifying information from other sources.</p><p>By using this website, you consent to the collection of this data in the manner and for the purpose described above.</p>
\ No newline at end of file

Modified: incubator/knox/trunk/books/0.3.0/book.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book.md?rev=1526721&r1=1526720&r2=1526721&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book.md (original)
+++ incubator/knox/trunk/books/0.3.0/book.md Thu Sep 26 22:01:13 2013
@@ -49,7 +49,7 @@
     * [Oozie](#Oozie)
     * [HBase/Starbase](#HBase)
     * [Hive](#Hive)
-* [Trouble Shooting](#Trouble+Shooting)
+* [Troubleshooting](#Troubleshooting)
 * [Export Controls](#Export+Controls)
 
 
@@ -73,7 +73,7 @@ In general the goals of the gateway are 
 <<book_gateway-details.md>>
 <<book_client-details.md>>
 <<book_service-details.md>>
-<<book_trouble-shooting.md>>
+<<book_troubleshooting.md>>
 
 
 ## Export Controls ##

Copied: incubator/knox/trunk/books/0.3.0/book_troubleshooting.md (from r1526719, incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md)
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_troubleshooting.md?p2=incubator/knox/trunk/books/0.3.0/book_troubleshooting.md&p1=incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md&r1=1526719&r2=1526721&rev=1526721&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_troubleshooting.md Thu Sep 26 22:01:13 2013
@@ -15,7 +15,7 @@
    limitations under the License.
 --->
 
-## Trouble Shooting ##
+## Troubleshooting ##
 
 ### Connection Errors ###