Posted to commits@knox.apache.org by km...@apache.org on 2013/09/26 23:53:28 UTC

svn commit: r1526719 - in /incubator/knox/trunk/books: 0.3.0/ common/

Author: kminder
Date: Thu Sep 26 21:53:28 2013
New Revision: 1526719

URL: http://svn.apache.org/r1526719
Log:
Cleanup of getting started section.

Modified:
    incubator/knox/trunk/books/0.3.0/book.md
    incubator/knox/trunk/books/0.3.0/book_client-details.md
    incubator/knox/trunk/books/0.3.0/book_gateway-details.md
    incubator/knox/trunk/books/0.3.0/book_getting-started.md
    incubator/knox/trunk/books/0.3.0/book_service-details.md
    incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md
    incubator/knox/trunk/books/0.3.0/config.md
    incubator/knox/trunk/books/0.3.0/config_authz.md
    incubator/knox/trunk/books/0.3.0/config_kerberos.md
    incubator/knox/trunk/books/0.3.0/config_sandbox.md
    incubator/knox/trunk/books/0.3.0/service_hbase.md
    incubator/knox/trunk/books/common/footer.md
    incubator/knox/trunk/books/common/header.md

Modified: incubator/knox/trunk/books/0.3.0/book.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book.md (original)
+++ incubator/knox/trunk/books/0.3.0/book.md Thu Sep 26 21:53:28 2013
@@ -60,10 +60,13 @@ The goal is to simplify Hadoop security 
 The gateway runs as a server (or cluster of servers) that provide centralized access to one or more Hadoop clusters.
 In general the goals of the gateway are as follows:
 
-* Provide perimeter security for Hadoop REST APIs to make Hadoop security setup easier
-* Support authentication and token verification security scenarios
-* Deliver users a single URL end-point that aggregates capabilities for data and jobs
-* Enable integration with enterprise and cloud identity management environments
+* Provide perimeter security for Hadoop REST APIs to make Hadoop security easier to set up and use
+    * Provide authentication and token verification at the perimeter
+    * Enable authentication integration with enterprise and cloud identity management systems
+    * Provide service level authorization at the perimeter
+* Expose a single URL hierarchy that aggregates REST APIs of a Hadoop cluster
+    * Limit the network endpoints (and therefore firewall holes) required to access a Hadoop cluster
+    * Hide the internal Hadoop cluster topology from potential attackers
 
 
 <<book_getting-started.md>>
@@ -76,12 +79,18 @@ In general the goals of the gateway are 
 ## Export Controls ##
 
 Apache Knox Gateway includes cryptographic software.
-The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software.
-BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted.
+The country in which you currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software.
+BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the
+import, possession, or use, and re-export of encryption software, to see if this is permitted.
 See http://www.wassenaar.org for more information.
 
-The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms.
-The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
+The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS),
+has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1,
+which includes information security software using or performing cryptographic functions with asymmetric algorithms.
+The form and manner of this Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception
+(see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
 
 The following provides more details on the included cryptographic software:
 

Modified: incubator/knox/trunk/books/0.3.0/book_client-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_client-details.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_client-details.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_client-details.md Thu Sep 26 21:53:28 2013
@@ -15,8 +15,7 @@
    limitations under the License.
 --->
 
-{{Client Details}}
-------------------
+## Client Details ##
 
 Hadoop requires a client that can be used to interact remotely with the services provided by Hadoop cluster.
 This will also be true when using the Apache Knox Gateway to provide perimeter security and centralized access for these services.

Modified: incubator/knox/trunk/books/0.3.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_gateway-details.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_gateway-details.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_gateway-details.md Thu Sep 26 21:53:28 2013
@@ -15,8 +15,7 @@
    limitations under the License.
 --->
 
-{{Gateway Details}}
--------------------
+## Gateway Details ##
 
 TODO
 

Modified: incubator/knox/trunk/books/0.3.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_getting-started.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_getting-started.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_getting-started.md Thu Sep 26 21:53:28 2013
@@ -31,18 +31,19 @@ Use the command below to check the versi
 
 #### Hadoop ####
 
-An an existing Hadoop 1.x or 2.x cluster is required for Knox to protect.
+An existing Hadoop 1.x or 2.x cluster is required for Knox to sit in front of and protect.
 One of the easiest ways to ensure this is to utilize a Hortonworks Sandbox VM.
 It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here.
 It is also possible to use a limited set of services in a Hadoop cluster secured with Kerberos.
 This too requires additional configuration that is not described here.
+See the [table provided](#Supported+Services) for details on what is supported for this release.
 
 Ensure that the Hadoop cluster has at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running.
 HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.
 
 The instructions that follow assume a few things:
 
-1. The gateway is *not* collocated with the Hadoop clusters themselves 
+1. The gateway is *not* collocated with the Hadoop clusters themselves.
 2. The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.
 
 All of the instructions and samples provided here are tailored and tested to work "out of the box" against a [Hortonworks Sandbox 2.x VM][sandbox].
@@ -50,12 +51,11 @@ All of the instructions and samples prov
 
 ### Download ###
 
-Download and extract the knox-{VERSION}.zip file into the installation directory.
-This directory will be referred to as your `{GATEWAY_HOME}`.
-You can find the downloads for Knox releases on the [Apache mirrors][mirror].
+Download one of the distributions below from the [Apache mirrors][mirror].
 
 * Source archive: [knox-incubating-0.3.0-src.zip][src-zip] ([PGP signature][src-pgp], [SHA1 digest][src-sha], [MD5 digest][src-md5])
 * Binary archive: [knox-incubating-0.3.0.zip][bin-zip] ([PGP signature][bin-pgp], [SHA1 digest][bin-sha], [MD5 digest][bin-md5])
+* RPM package: [knox-incubating-0.3.0.rpm][rpm] ([PGP signature][rpm-pgp], [SHA1 digest][rpm-sha], [MD5 digest][rpm-md5])
 
 [src-zip]: http://www.apache.org/dyn/closer.cgi/incubator/knox/0.3.0/knox-incubating-0.3.0-src.zip
 [src-sha]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0-src.zip.sha
@@ -65,20 +65,23 @@ You can find the downloads for Knox rele
 [bin-pgp]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.asc
 [bin-sha]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.sha
 [bin-md5]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.zip.md5
+[rpm]: http://www.apache.org/dyn/closer.cgi/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm
+[rpm-sha]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm.sha
 [rpm-pgp]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm.asc
+[rpm-md5]: http://www.apache.org/dist/incubator/knox/0.3.0/knox-incubating-0.3.0.rpm.md5
 
 Apache Knox Gateway releases are available under the [Apache License, Version 2.0][asl].
 See the NOTICE file contained in each release artifact for applicable copyright attribution notices.
 
 
-{{Verify}}
-------------------------
+### Verify ###
 
-It is essential that you verify the integrity of the downloaded files using the PGP signatures.
-Please read Verifying Apache HTTP Server Releases for more information on why you should verify our releases.
+It is essential that you verify the integrity of any downloaded files using the PGP signatures.
+Please read [Verifying Apache HTTP Server Releases](http://httpd.apache.org/dev/verification.html) for more information on why you should verify our releases.
 
 The PGP signatures can be verified using PGP or GPG.
 First download the KEYS file as well as the .asc signature files for the relevant release packages.
-Make sure you get these files from the main distribution directory, rather than from a mirror.
+Make sure you get these files from the main distribution directory linked above, rather than from a mirror.
 Then verify the signatures using one of the methods below.
 
     % pgpk -a KEYS
@@ -97,14 +100,21 @@ or
 
 ### Install ###
 
+The steps required to install the gateway will vary depending upon which distribution format was downloaded.
+In either case you will end up with a directory where the gateway is installed.
+This directory will be referred to as your `{GATEWAY_HOME}` throughout this document.
+
 #### ZIP ####
 
-Download and extract the `knox-{VERSION}.zip` file into the installation directory that will contain your `{GATEWAY_HOME}`.
-You can find the downloads for Knox releases on the [Apache mirrors][mirror].
+If you downloaded the Zip distribution you can simply extract the contents into a directory.
+The example below provides a command that can be executed to do this.
+Note the `{VERSION}` portion of the command must be replaced with an actual Apache Knox Gateway version number.
+This might be 0.3.0 for example and must match the version in the name of the file downloaded.
 
-    jar xf knox-{VERSION}.zip
+    jar xf knox-incubating-{VERSION}.zip
 
-This will create a directory `knox-{VERSION}` in your current directory.
+This will create a directory `knox-incubating-{VERSION}` in your current directory.
+The directory `knox-incubating-{VERSION}` will be considered your `{GATEWAY_HOME}`.
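+
+If the JDK `jar` tool is not available on your path, any standard Zip utility can be used instead.
+The command below is an equivalent sketch using `unzip`; as above, replace `{VERSION}` with an actual version number.
+
+    unzip knox-incubating-{VERSION}.zip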
 
 
 #### RPM ####
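+
+The detailed RPM install steps are not yet documented.
+As a rough sketch, an RPM package of this kind would typically be installed as shown below; the exact behavior and resulting installation directory of the Knox RPM are assumptions here.
+
+    rpm -ivh knox-incubating-{VERSION}.rpm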
@@ -114,7 +124,26 @@ TODO
 
 #### Layout ####
 
-TODO - Describe the purpose of all of the directories
+Regardless of the installation method used, the layout and content of the `{GATEWAY_HOME}` will be identical.
+The table below provides a brief explanation of the important files and directories within `{GATEWAY_HOME}`.
+
+| Directory     | Purpose |
+| ------------- | ------- |
+| conf/         | Contains configuration files that apply to the gateway globally (i.e. not cluster specific).        |
+| bin/          | Contains the executable shell scripts, batch files and JARs for clients and servers.                |
+| deployments/  | Contains topology descriptors used to configure the gateway for specific Hadoop clusters.           |
+| lib/          | Contains the JARs for all the components that make up the gateway.                                  |
+| dep/          | Contains the JARs for all of the components upon which the gateway depends.                         |
+| ext/          | A directory where user supplied extension JARs can be placed to extend the gateway's functionality. |
+| samples/      | Contains a number of samples that can be used to explore the functionality of the gateway.          |
+| templates/    | Contains default configuration files that can be copied and customized.                             |
+| README        | Provides basic information about the Apache Knox Gateway.                                           |
+| ISSUES        | Describes significant known issues.                                                                 |
+| CHANGES       | Enumerates the changes between releases.                                                            |
+| INSTALL       | Provides simple installation instructions.                                                          |
+| LICENSE       | Documents the license under which this software is provided.                                        |
+| NOTICE        | Documents required attribution notices for included dependencies.                                   |
+| DISCLAIMER    | Documents that this release is from a project undergoing incubation at Apache.                      |
 
 
 ### Supported Services ###
@@ -124,44 +153,55 @@ Only more recent versions of some Hadoop
 
 | Service           | Version    | Non-Secure  | Secure |
 | ----------------- | ---------- | ----------- | ------ |
-| WebHDFS           | 2.1.0      | ![y]        | ![?]![y]   |
-| WebHCat/Templeton | 0.11.0     | ![y]        | ![?]![n]   |
-| Ozzie             | 4.0.0      | ![y]        | ![?]   |
+| WebHDFS           | 2.1.0      | ![y]        | ![y]   |
+| WebHCat/Templeton | 0.11.0     | ![y]        | ![y]   |
+| Oozie             | 4.0.0      | ![y]        | ![y]   |
 | HBase/Stargate    | 0.95.2     | ![y]        | ![?]   |
+| Hive/WebHCat      | 0.11.0     | ![y]        | ![y]   |
+|                   | 0.12.0     | ![y]        | ![y]   |
 | Hive/JDBC         | 0.11.0     | ![n]        | ![n]   |
 |                   | 0.12.0     | ![?]![y]    | ![?]   |
 | Hive/ODBC         | 0.12.0     | ![?]        | ![?]   |
 
-ProxyUser feature of WebHDFS, WebHCat and Oozie required for secure cluster support seem to work fine.
-Knox code seems to be broken for support of secure cluster at this time for WebHDFS, WebHCat and Oozie.
-
 
 ### Basic Usage ###
 
+The steps described below are intended to get the Knox Gateway server up and running in its default configuration.
+Once that is accomplished a very simple example of using the gateway to interact with a Hadoop cluster is provided.
+More detailed configuration information is provided in the [Gateway Details](#Gateway+Details) section.
+More detailed examples for using each Hadoop service can be found in the [Service Details](#Service+Details) section.
+
+Note that *nix conventions are used throughout this section but in general the Windows alternative should be obvious.
+In situations where this is not the case a Windows alternative will be provided.
+
 #### Starting Servers ####
 
 ##### 1. Enter the `{GATEWAY_HOME}` directory
 
-    cd knox-{VERSION}
+    cd knox-incubating-{VERSION}
 
-The fully qualified name of this directory will be referenced as `{GATEWAY_HOME}}} throughout the remainder of this document.
+The fully qualified name of this directory will be referenced as `{GATEWAY_HOME}` throughout this document.
 
 ##### 2. Start the demo LDAP server (ApacheDS)
 
 First, understand that the LDAP server provided here is for demonstration purposes.
-You may configure the LDAP specifics within the topology descriptor for the cluster as described in step 5 below, in order to customize what LDAP instance to use.
+You may configure the gateway to utilize other LDAP systems via the topology descriptor.
+This is described in step 5 below.
 The assumption is that most users will leverage the demo LDAP server while evaluating this release and should therefore continue with the instructions here in step 3.
 
-Edit `{GATEWAY_HOME}/conf/users.ldif` if required and add your users and groups to the file.
-A sample end user "bob" has been already included.
+Edit `{GATEWAY_HOME}/conf/users.ldif` if required and add any desired users and groups to the file.
+A sample end user "guest" has already been included.
 Note that the passwords in this file are "fictitious" and have nothing to do with the actual accounts on the Hadoop cluster you are using.
 There is also a copy of this file in the templates directory that you can use to start over if necessary.
+This file is only used by the demo LDAP server.
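+
+For illustration, an entry in `users.ldif` for the sample "guest" user might look roughly like the sketch below.
+The DN structure and attribute names shown are assumptions based on common LDIF conventions; consult the shipped file for the authoritative format.
+
+    dn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
+    objectclass: top
+    objectclass: person
+    objectclass: organizationalPerson
+    objectclass: inetOrgPerson
+    cn: Guest
+    sn: User
+    uid: guest
+    userPassword: guest-password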
 
-Start the LDAP server - pointing it to the config dir where it will find the users.ldif file in the conf directory.
+Start the LDAP server - pointing it to the config dir where it will find the `users.ldif` file in the conf directory.
 
     java -jar bin/ldap.jar conf &
 
-There are a number of log messages of the form {{Created null.` that can safely be ignored.
+_On Windows this command can be run in its own command window instead of running it in the background via `&`._
+
+There are a number of log messages of the form `Created null.` that can safely be ignored.
 Take note of the port on which it was started as this needs to match later configuration.
 
 ##### 3. Start the gateway server
@@ -170,11 +210,12 @@ Take note of the port on which it was st
 
 Take note of the port identified in the logging output as you will need this for accessing the gateway.
 
-The server will prompt you for the master secret (password).
-This secret is used to secure artifacts used to secure artifacts used by the gateway server for things like SSL, credential/password aliasing.
+The server will prompt you for the master secret (i.e. password).
+This secret is used to secure artifacts used by the gateway server for things like SSL and credential/password aliasing.
 This secret will have to be entered at startup unless you choose to persist it.
+See the Persisting the Master Secret section for more information.
 Remember this secret and keep it safe.
-It represents the keys to the kingdom. See the Persisting the Master section for more information.
+It represents the keys to the kingdom.
 
 ##### 4. Configure the Gateway with the topology of your Hadoop cluster
 
@@ -183,10 +224,11 @@ Edit the file `{GATEWAY_HOME}/deployment
 Change the host and port in the urls of the `<service>` elements for WEBHDFS, WEBHCAT, OOZIE, WEBHBASE and HIVE services to match your Hadoop cluster deployment.
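+
+For example, a `<service>` element for WebHDFS might look something like the sketch below; the host and port shown are placeholders that must be replaced with values from your cluster.
+
+    <service>
+        <role>WEBHDFS</role>
+        <url>http://{webhdfs-host}:50070/webhdfs</url>
+    </service>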
 
 The default configuration contains the LDAP URL for a LDAP server.
-By default that file is configured to access the demo ApacheDS based LDAP
-server and its default configuration. By default, this server listens on port 33389.
+By default that file is configured to access the demo ApacheDS based LDAP server and its default configuration.
+The ApacheDS based LDAP server listens on port 33389 by default.
 Optionally, you can change the LDAP URL for the LDAP server to be used for authentication.
-This is set via the main.ldapRealm.contextFactory.url property in the `<gateway><provider><authentication>` section.
+This is set via the `main.ldapRealm.contextFactory.url` property in the `<gateway><provider><authentication>` section.
+If you use an LDAP system other than the demo LDAP server you may need to change additional configuration as well.
 
 Save the file.
 The directory `{GATEWAY_HOME}/deployments` is monitored by the gateway server.
@@ -194,14 +236,19 @@ When a new or changed cluster topology d
 Note that the name of the file excluding the extension is also used as the path for that cluster in the URL.
 For example the `sandbox.xml` file will result in gateway URLs of the form `http://{gateway-host}:{gateway-port}/gateway/sandbox/webhdfs`.
 
-##### 5. Test the installation and configuration of your Gateway
+##### 5. Test the installation
 
-Invoke the LISTSATUS operation on HDFS represented by your configured NAMENODE by using your web browser or curl:
+Invoke the LISTSTATUS operation on WebHDFS via the gateway.
+This will return a directory listing of the root (i.e. /) directory of HDFS.
 
     curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
 
-The results of the above command should result in something to along the lines of the output below.  The exact information returned is subject to the content within HDFS in your Hadoop cluster.
+The above command should produce output along the lines of the example below.
+The exact information returned is subject to the content within HDFS in your Hadoop cluster.
+Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS.
+It does not necessarily prove that any of the other services are correctly configured to be accessible.
+To validate that, see the sections for the individual services in [Service Details](#Service+Details).
 
     HTTP/1.1 200 OK
     Content-Type: application/json
@@ -215,7 +262,7 @@ The results of the above command should 
     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
     ]}}
 
-For additional information on WebHDFS, Templeton/WebHCat and Oozie REST APIs, see the following URLs respectively:
+For additional information on WebHDFS, WebHCat/Templeton, Oozie and HBase/Stargate REST APIs, see the following URLs respectively:
 
 * WebHDFS - http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
 * WebHCat (Templeton) - http://people.apache.org/~thejas/templeton_doc_v1
@@ -231,36 +278,3 @@ These examples provide more detail about
 * [Oozie](#Oozie+Examples)
 * [HBase](#HBase+Examples)
 * [Hive](#Hive+Examples)
-
-
-{{Sandbox Configuration}}
--------------------------
-
-This version of the Apache Knox Gateway is tested against [Hortonworks Sandbox 2.x|sandbox]
-
-Currently there is an issue with Sandbox that prevents it from being easily used with the gateway.
-In order to correct the issue, you can use the commands below to login to the Sandbox VM and modify the configuration.
-This assumes that the name sandbox is setup to resolve to the Sandbox VM.
-It may be necessary to use the IP address of the Sandbox VM instead.
-*This is frequently but not always `192.168.56.101`.*
-
-    ssh root@sandbox
-    cp /usr/lib/hadoop/conf/hdfs-site.xml /usr/lib/hadoop/conf/hdfs-site.xml.orig
-    sed -e s/localhost/sandbox/ /usr/lib/hadoop/conf/hdfs-site.xml.orig > /usr/lib/hadoop/conf/hdfs-site.xml
-    shutdown -r now
-
-
-In addition to make it very easy to follow along with the samples for the gateway you can configure your local system to resolve the address of the Sandbox by the names `vm` and `sandbox`.
-The IP address that is shown below should be that of the Sandbox VM as it is known on your system.
-*This will likely, but not always, be `192.168.56.101`.*
-
-On Linux or Macintosh systems add a line like this to the end of the file `/etc/hosts` on your local machine, *not the Sandbox VM*.
-_Note: The character between the 192.168.56.101 and vm below is a *tab* character._
-
-    192.168.56.101	vm sandbox
-
-On Windows systems a similar but different mechanism can be used.  On recent
-versions of windows the file that should be modified is `%systemroot%\system32\drivers\etc\hosts`
-
-
-

Modified: incubator/knox/trunk/books/0.3.0/book_service-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_service-details.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_service-details.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_service-details.md Thu Sep 26 21:53:28 2013
@@ -15,10 +15,9 @@
    limitations under the License.
 --->
 
-{{Service Details}}
--------------------
+## Service Details ##
 
-TODO
+TODO - Service details overview
 
 <<service_webhdfs.md>>
 <<service_webhcat.md>>

Modified: incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_trouble-shooting.md Thu Sep 26 21:53:28 2013
@@ -15,13 +15,18 @@
    limitations under the License.
 --->
 
-{{Trouble Shooting}}
---------------------
+## Trouble Shooting ##
+
+### Connection Errors ###
+
+TODO - Explain how to debug connection errors.
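+
+Until this section is complete, a useful first check is to verify that the gateway host can reach the Hadoop cluster service directly, bypassing the gateway.
+The sketch below assumes WebHDFS listening on the default NameNode HTTP port; adjust the host and port for your cluster.
+
+    curl -i 'http://{webhdfs-host}:50070/webhdfs/v1/?op=LISTSTATUS'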
+
 
 ### Enabling Logging ###
 
-The `log4j.properties` files `<GATEWAY_HOME>/conf` can be used to change the granularity of the logging done by Knox. &nbsp;The Knox server must be restarted in order for these changes to take effect.
-There are various useful loggers pre-populated in that file but they are commented out.
+The `log4j.properties` file in `{GATEWAY_HOME}/conf` can be used to change the granularity of the logging done by Knox.
+The Knox server must be restarted in order for these changes to take effect.
+There are various useful loggers pre-populated but commented out.
 
     log4j.logger.org.apache.hadoop.gateway=DEBUG # Use this logger to increase the debugging of Apache Knox itself.
     log4j.logger.org.apache.shiro=DEBUG          # Use this logger to increase the debugging of Apache Shiro.
@@ -30,13 +35,12 @@ There are various useful loggers pre-pop
     log4j.logger.org.apache.http.headers=DEBUG   # Use this logger to increase the debugging of Apache HTTP header.
     log4j.logger.org.apache.http.wire=DEBUG      # Use this logger to increase the debugging of Apache HTTP wire traffic.
 
-### Filing Bugs ###
 
-h2. Filing bugs
+### Filing Bugs ###
 
-Bugs can be filed using [Jira](https://issues.apache.org/jira/browse/KNOX).
+Bugs can be filed using [Jira][jira].
 Please include the results of this command below in the Environment section.
-Also include the version of Hadoop being used.
+Also include the version of Hadoop being used in the same section.
 
     java -jar bin/server.jar -version
 

Modified: incubator/knox/trunk/books/0.3.0/config.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config.md (original)
+++ incubator/knox/trunk/books/0.3.0/config.md Thu Sep 26 21:53:28 2013
@@ -15,12 +15,11 @@
    limitations under the License.
 --->
 
-{{Configuration}}
------------------
+### Configuration ###
 
-### Host Mapping ###
+#### Host Mapping ####
 
-TODO
+TODO - Complete Host Mapping docs.
 
 That really depends upon how you have your VM configured.
 If you can hit http://c6401.ambari.apache.org:1022/ directly from your client and knox host then you probably don't need the hostmap at all.
@@ -33,19 +32,19 @@ Please try it and file a jira if that do
 If so, simply either remove the full provider config for hostmap or remove the <param/> that defines the mapping.
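+
+For reference, a hostmap provider entry in the topology descriptor might look roughly like the sketch below; the provider name `static` and the param shown (mapping an external host name to an internal one) are assumptions for illustration.
+
+    <provider>
+        <role>hostmap</role>
+        <name>static</name>
+        <enabled>true</enabled>
+        <param>
+            <name>external-host</name>
+            <value>internal-host</value>
+        </param>
+    </provider>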
 
 
-### Logging ###
+#### Logging ####
 
 If necessary you can enable additional logging by editing the `log4j.properties` file in the `conf` directory.
 Changing the rootLogger value from `ERROR` to `DEBUG` will generate a large amount of debug logging.
 A number of useful, finer grained loggers are also provided in the file.
 
 
-### Java VM Options ###
+#### Java VM Options ####
 
-TODO
+TODO - Java VM options doc.
 
 
-### Persisting the Master Secret ###
+#### Persisting the Master Secret ####
 
 The master secret is required to start the server.
 This secret is used by the gateway instance to access secured artifacts.
@@ -64,7 +63,7 @@ Do not assume that the encryption if suf
 A specific user should be created to run the gateway; this will protect a persisted master file.
 
 
-### Management of Security Artifacts ###
+#### Management of Security Artifacts ####
 
 There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, access to protected resources and the encryption of sensitive data.
 These artifacts can be managed from outside of the gateway instances or generated and populated by the gateway instance itself.

Modified: incubator/knox/trunk/books/0.3.0/config_authz.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config_authz.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config_authz.md (original)
+++ incubator/knox/trunk/books/0.3.0/config_authz.md Thu Sep 26 21:53:28 2013
@@ -134,7 +134,7 @@ The above configuration enables the auth
 
     <param>
         <name>{serviceName}.acl</name>
-        <value>username[,*|username…];group[,*|group…];ipaddr[,*|ipaddr…]</value>
+        <value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
     </param>
 
 where `{serviceName}` would need to be the name of a configured Hadoop service within the topology.
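+
+For instance, the sketch below would grant access to a service to the single user "guest" from any group and any address; the service name `webhdfs` and the user name are assumptions for illustration.
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>guest;*;*</value>
+    </param>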
@@ -197,11 +197,15 @@ this configuration indicates that ONE of
 
 The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.
 
-This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal. If there is no mapping to another principal then the authenticated or primary principal is then the effective principal. Principal mapping has actually been available in the identity assertion provider from the beginning of Knox. Although hasn’t been adequately documented as of yet.
+This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.
+When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal.
+If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.
+Principal mapping has actually been available in the identity assertion provider from the beginning of Knox,
+although it hasn't been adequately documented until now.
 
     <param>
         <name>principal.mapping</name>
-        <value>{primaryPrincipal}[,…]={impersonatedPrincipal}[;…]</value>
+        <value>{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]</value>
     </param>
 
 For instance:
@@ -241,7 +245,7 @@ In addition, we allow the administrator 
 
     <param>
         <name>group.principal.mapping</name>
-        <value>{userName[,*|userName…]}={groupName[,groupName…]}[,…]</value>
+        <value>{userName[,*|userName...]}={groupName[,groupName...]}[,...]</value>
     </param>
 
 For instance:

Modified: incubator/knox/trunk/books/0.3.0/config_kerberos.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config_kerberos.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config_kerberos.md (original)
+++ incubator/knox/trunk/books/0.3.0/config_kerberos.md Thu Sep 26 21:53:28 2013
@@ -15,13 +15,10 @@
    limitations under the License.
 --->
 
-{{Secure Clusters}}
--------------------
+### Secure Clusters ###
 
 If your Hadoop cluster is secured with Kerberos authentication, you have to do the following on the Knox side.
 
-### Secure the Hadoop Cluster ###
-
 Please secure Hadoop services with Kerberos authentication.
 
 Please see instructions at
@@ -30,11 +27,11 @@ and
 [http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.1/bk_installing_manually_book/content/rpm-chap14.html]
 
 
-### Create Unix account for Knox on Hadoop master nodes ###
+#### Create Unix account for Knox on Hadoop master nodes ####
 
-    useradd \-g hadoop knox
+    useradd -g hadoop knox
 
-### Create Kerberos principal, keytab for Knox
+#### Create Kerberos principal, keytab for Knox ####
 
 One way of doing this, assuming your KDC realm is EXAMPLE.COM
 
@@ -44,7 +41,7 @@ ssh into your host running KDC
     add_principal -randkey knox/knox@EXAMPLE.COM
     ktadd -norandkey -k /etc/security/keytabs/knox.service.keytab knox/knox@EXAMPLE.COM
 
-### Grant Proxy privileges for Knox user in `core-site.xml` on Hadoop master nodes
+#### Grant Proxy privileges for Knox user in `core-site.xml` on Hadoop master nodes ####
 
 Update `core-site.xml` and add the following lines towards the end of the file.
 
@@ -60,7 +57,7 @@ You could use * for local developer test
         <value>FQDN_OF_KNOX_HOST</value>
     </property>
 
-### Grant proxy privilege for Knox in `oozie-stie.xml` on Oozie host ###
+#### Grant proxy privilege for Knox in `oozie-site.xml` on Oozie host ####
 
 Update `oozie-site.xml` and add the following lines towards the end of the file.
 
@@ -76,7 +73,7 @@ You could use * for local developer test
        <value>FQDN_OF_KNOX_HOST</value>
     </property>
 
-### Copy knox keytab to Knox host ###
+#### Copy knox keytab to Knox host ####
 
 Please add a Unix account for knox on the Knox host.
 
@@ -88,22 +85,22 @@ Please copy knox.service.keytab created 
     chmod 400 knox.service.keytab
 
 
-### Update krb5.conf at /etc/knox/conf/krb5.conf on Knox host ###
+#### Update `krb5.conf` at `/etc/knox/conf/krb5.conf` on Knox host ####
 
 You could copy the `templates/krb5.conf` file provided in the Knox binary download and customize it to suit your cluster.
 
 
-### Update `krb5JAASLogin.conf` at `/etc/knox/conf/krb5JAASLogin.conf` on Knox host ###
+#### Update `krb5JAASLogin.conf` at `/etc/knox/conf/krb5JAASLogin.conf` on Knox host ####
 
 You could copy the `templates/krb5JAASLogin.conf` file provided in the Knox binary download and customize it to suit your cluster.
 
 
-### Update `gateway-site.xml` on Knox host on Knox host ###
+#### Update `gateway-site.xml` on Knox host ####
 
 Update `conf/gateway-site.xml` in your Knox installation and set the value of `gateway.hadoop.kerberos.secured` to true.
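+
+The resulting property in `conf/gateway-site.xml` should look like the following minimal sketch, which shows only the property being changed.
+
+    <property>
+        <name>gateway.hadoop.kerberos.secured</name>
+        <value>true</value>
+    </property>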
 
 
-### Restart Knox ###
+#### Restart Knox ####
 
 After you do the above configurations and restart Knox, Knox will use SPNEGO to authenticate with Hadoop services and Oozie.
 There is no change in the way you make calls to Knox whether you use curl or the Knox DSL.

Modified: incubator/knox/trunk/books/0.3.0/config_sandbox.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config_sandbox.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config_sandbox.md (original)
+++ incubator/knox/trunk/books/0.3.0/config_sandbox.md Thu Sep 26 21:53:28 2013
@@ -15,25 +15,35 @@
    limitations under the License.
 --->
 
-{{Sandbox Configuration}}
--------------------------
+### Sandbox 2.x Configuration ###
 
-This version of the Apache Knox Gateway is tested against [Hortonworks Sandbox 1.2|http://hortonworks.com/products/hortonworks-sandbox/]
+TODO
 
-Currently there is an issue with Sandbox that prevents it from being easily used with the gateway.  In order to correct the issue, you can use the commands below to login to the Sandbox VM and modify the configuration.  This assumes that the name sandbox is setup to resolve to the Sandbox VM.  It may be necessary to use the IP address of the Sandbox VM instead. *This is frequently but not always* {{{*}192.168.56.101{*}}}*.*
+### Sandbox 1.x Configuration ###
+
+TODO - Update this section to use hostmap if that simplifies things.
+
+This version of the Apache Knox Gateway is tested against [Hortonworks Sandbox 1.x][sandbox].
+
+Currently there is an issue with Sandbox that prevents it from being easily used with the gateway.
+In order to correct the issue, you can use the commands below to login to the Sandbox VM and modify the configuration.
+This assumes that the name sandbox is setup to resolve to the Sandbox VM.
+It may be necessary to use the IP address of the Sandbox VM instead.
+*This is frequently but not always `192.168.56.101`.*
 
     ssh root@sandbox
     cp /usr/lib/hadoop/conf/hdfs-site.xml /usr/lib/hadoop/conf/hdfs-site.xml.orig
     sed -e s/localhost/sandbox/ /usr/lib/hadoop/conf/hdfs-site.xml.orig > /usr/lib/hadoop/conf/hdfs-site.xml
     shutdown -r now
 
+In addition, to make it very easy to follow along with the samples for the gateway, you can configure your local system to resolve the address of the Sandbox by the names `vm` and `sandbox`.
+The IP address that is shown below should be that of the Sandbox VM as it is known on your system.
+*This will likely, but not always, be `192.168.56.101`.*
 
-In addition to make it very easy to follow along with the samples for the gateway you can configure your local system to resolve the address of the Sandbox by the names {{vm}} and {{sandbox}}.  The IP address that is shown below should be that of the Sandbox VM as it is known on your system.  This will likely, but not always, be {{192.168.56.101}}.
-
-On Linux or Macintosh systems add a line like this to the end of the file&nbsp;{{/etc/hosts}}&nbsp;on your local machine, *not the Sandbox VM*.
-_Note: The character between the {{{_}192.168.56.101{_}}} and {{{_}vm{_}}} below is a *{_}tab{_}* character._
+On Linux or Macintosh systems add a line like this to the end of the file `/etc/hosts` on your local machine, *not the Sandbox VM*.
+_Note: The character between the 192.168.56.101 and vm below is a *tab* character._
 
     192.168.56.101	vm sandbox
 
 On Windows systems a similar but different mechanism can be used.  On recent
-versions of windows the file that should be modified is {{%systemroot%\system32\drivers\etc\hosts}}
+versions of Windows the file that should be modified is `%systemroot%\system32\drivers\etc\hosts`.

Modified: incubator/knox/trunk/books/0.3.0/service_hbase.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_hbase.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_hbase.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_hbase.md Thu Sep 26 21:53:28 2013
@@ -46,7 +46,7 @@ The command below launches the Stargate 
 
     sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
 
-60080 post is used because it was specified in sample Hadoop cluster deployment {{\{GATEWAY_HOME\}}}/deployments/sample.xml.
+Port 60080 is used because it was specified in the sample Hadoop cluster deployment `{GATEWAY_HOME}/deployments/sample.xml`.
 
 #### Configure Sandbox port mapping for VirtualBox
 
@@ -59,7 +59,7 @@ The command below launches the Stargate 
 7. Press OK to close the rule window
 8. Press OK on the Network window to save the changes
 
-60080 post is used because it was specified in sample Hadoop cluster deployment {{\{GATEWAY_HOME\}}}/deployments/sample.xml.
+Port 60080 is used because it was specified in the sample Hadoop cluster deployment `{GATEWAY_HOME}/deployments/sample.xml`.
 
 ### HBase/Stargate via KnoxShell DSL
 
@@ -73,7 +73,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).systemVersion().now().string}}
+    * `HBase.session(session).systemVersion().now().string`
 
 ##### clusterVersion() - Query Storage Cluster Version.
 
@@ -82,7 +82,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).clusterVersion().now().string}}
+    * `HBase.session(session).clusterVersion().now().string`
 
 ##### status() - Query Storage Cluster Status.
 
@@ -91,7 +91,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).status().now().string}}
+    * `HBase.session(session).status().now().string`
 
 ##### table().list() - Query Table List.
 
@@ -100,7 +100,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-  * {{HBase.session(session).table().list().now().string}}
+  * `HBase.session(session).table().list().now().string`
 
 ##### table(String tableName).schema() - Query Table Schema.
 
@@ -109,7 +109,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).table().schema().now().string}}
+    * `HBase.session(session).table().schema().now().string`
 
 ##### table(String tableName).create() - Create Table Schema.
 * Request
@@ -120,18 +120,18 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).create()}}
-     {{.attribute("tb_attr1", "value1")}}
-     {{.attribute("tb_attr2", "value2")}}
-     {{.family("family1")}}
-         {{.attribute("fm_attr1", "value3")}}
-         {{.attribute("fm_attr2", "value4")}}
-     {{.endFamilyDef()}}
-     {{.family("family2")}}
-     {{.family("family3")}}
-     {{.endFamilyDef()}}
-     {{.attribute("tb_attr3", "value5")}}
-     {{.now()}}
+    * ```HBase.session(session).table(tableName).create()
+       .attribute("tb_attr1", "value1")
+       .attribute("tb_attr2", "value2")
+       .family("family1")
+           .attribute("fm_attr1", "value3")
+           .attribute("fm_attr2", "value4")
+       .endFamilyDef()
+       .family("family2")
+       .family("family3")
+       .endFamilyDef()
+       .attribute("tb_attr3", "value5")
+       .now()```
 
 ##### table(String tableName).update() - Update Table Schema.
 * Request
@@ -141,14 +141,14 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).update()}}
-     {{.family("family1")}}
-         {{.attribute("fm_attr1", "new_value3")}}
-     {{.endFamilyDef()}}
-     {{.family("family4")}}
-         {{.attribute("fm_attr3", "value6")}}
-     {{.endFamilyDef()}}
-     {{.now()}}
+    * ```HBase.session(session).table(tableName).update()
+         .family("family1")
+             .attribute("fm_attr1", "new_value3")
+         .endFamilyDef()
+         .family("family4")
+             .attribute("fm_attr3", "value6")
+         .endFamilyDef()
+         .now()```
 
 ##### table(String tableName).regions() - Query Table Metadata.
 * Request
@@ -156,7 +156,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).table(tableName).regions().now().string}}
+    * `HBase.session(session).table(tableName).regions().now().string`
 
 ##### table(String tableName).delete() - Delete Table.
 * Request
@@ -164,7 +164,7 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).delete().now()}}
+    * `HBase.session(session).table(tableName).delete().now()`
 
 ##### table(String tableName).row(String rowId).store() - Cell Store.
 * Request
@@ -172,14 +172,14 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).row("row_id_1").store()}}
-     {{.column("family1", "col1", "col_value1")}}
-     {{.column("family1", "col2", "col_value2", 1234567890l)}}
-     {{.column("family2", null, "fam_value1")}}
-     {{.now()}}
-    * {{HBase.session(session).table(tableName).row("row_id_2").store()}}
-     {{.column("family1", "row2_col1", "row2_col_value1")}}
-     {{.now()}}
+    * ```HBase.session(session).table(tableName).row("row_id_1").store()
+         .column("family1", "col1", "col_value1")
+         .column("family1", "col2", "col_value2", 1234567890l)
+         .column("family2", null, "fam_value1")
+         .now()```
+    * ```HBase.session(session).table(tableName).row("row_id_2").store()
+         .column("family1", "row2_col1", "row2_col_value1")
+         .now()```
 
 ##### table(String tableName).row(String rowId).query() - Cell or Row Query.
 * rowId is optional. Querying with null or empty rowId will select all rows.
@@ -192,16 +192,16 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).table(tableName).row("row_id_1")}}
-     {{.query()}}
-     {{.now().string}}
-    * {{HBase.session(session).table(tableName).row().query().now().string}}
-    * {{HBase.session(session).table(tableName).row().query()}}
-     {{.column("family1", "row2_col1")}}
-     {{.column("family2")}}
-     {{.times(0, Long.MAX_VALUE)}}
-     {{.numVersions(1)}}
-     {{.now().string}}
+    * ```HBase.session(session).table(tableName).row("row_id_1")
+         .query()
+         .now().string```
+    * `HBase.session(session).table(tableName).row().query().now().string`
+    * ```HBase.session(session).table(tableName).row().query()
+         .column("family1", "row2_col1")
+         .column("family2")
+         .times(0, Long.MAX_VALUE)
+         .numVersions(1)
+         .now().string```
 
 ##### table(String tableName).row(String rowId).delete() - Row, Column, or Cell Delete.
 * Request
@@ -210,15 +210,15 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).row("row_id_1")}}
-     {{.delete()}}
-     {{.column("family1", "col1")}}
-     {{.now()}}
-    * {{HBase.session(session).table(tableName).row("row_id_1")}}
-     {{.delete()}}
-     {{.column("family2")}}
-     {{.time(Long.MAX_VALUE)}}
-     {{.now()}}
+    * ```HBase.session(session).table(tableName).row("row_id_1")
+         .delete()
+         .column("family1", "col1")
+         .now()```
+    * ```HBase.session(session).table(tableName).row("row_id_1")
+         .delete()
+         .column("family2")
+         .time(Long.MAX_VALUE)
+         .now()```
 
 ##### table(String tableName).scanner().create() - Scanner Creation.
 * Request
@@ -235,17 +235,17 @@ For more details about client DSL usage 
 * Response
     * scannerId : String - the scanner ID of the created scanner. Consumes body.
 * Example
-    * {{HBase.session(session).table(tableName).scanner().create()}}
-     {{.column("family1", "col2")}}
-     {{.column("family2")}}
-     {{.startRow("row_id_1")}}
-     {{.endRow("row_id_2")}}
-     {{.batch(1)}}
-     {{.startTime(0)}}
-     {{.endTime(Long.MAX_VALUE)}}
-     {{.filter("")}}
-     {{.maxVersions(100)}}
-     {{.now()}}
+    * ```HBase.session(session).table(tableName).scanner().create()
+         .column("family1", "col2")
+         .column("family2")
+         .startRow("row_id_1")
+         .endRow("row_id_2")
+         .batch(1)
+         .startTime(0)
+         .endTime(Long.MAX_VALUE)
+         .filter("")
+         .maxVersions(100)
+         .now()```
 
 ##### table(String tableName).scanner(String scannerId).getNext() - Scanner Get Next.
 * Request
@@ -253,7 +253,7 @@ For more details about client DSL usage 
 * Response
     * BasicResponse
 * Example
-    * {{HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string}}
+    * `HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string`
 
 ##### table(String tableName).scanner(String scannerId).delete() - Scanner Deletion.
 * Request
@@ -261,7 +261,7 @@ For more details about client DSL usage 
 * Response
     * EmptyResponse
 * Example
-    * {{HBase.session(session).table(tableName).scanner(scannerId).delete().now()}}
+    * `HBase.session(session).table(tableName).scanner(scannerId).delete().now()`
 
 #### Examples
 

Modified: incubator/knox/trunk/books/common/footer.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/common/footer.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/common/footer.md (original)
+++ incubator/knox/trunk/books/common/footer.md Thu Sep 26 21:53:28 2013
@@ -15,8 +15,7 @@
    limitations under the License.
 --->
 
-Disclaimer
-----------
+## Disclaimer ##
 
 The Apache Knox Gateway is an effort undergoing incubation at the Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.
 
@@ -25,21 +24,18 @@ Incubation is required of all newly acce
 While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
 
 
-Trademarks
-----------
+## Trademarks ##
 
-Apache Knox Gateway, Apache, the Apache feather logo and the Apache Knox Gateway project logos are trademarks of The Apache Software Foundation.
+Apache Knox, Apache Knox Gateway, Apache, the Apache feather logo and the Apache Knox Gateway project logos are trademarks of The Apache Software Foundation.
 All other marks mentioned may be trademarks or registered trademarks of their respective owners.
 
 
-License
--------
+## License ##
 
 Apache Knox uses the standard [Apache license][asl].
 
 
-Privacy Policy
---------------
+## Privacy Policy ##
 
 Apache Knox uses the standard Apache privacy policy.
 
@@ -52,9 +48,9 @@ The collected information consists of th
 * The pages you visit; and
 * The addresses of pages from where you followed a link to our site.
 
-Part of this information is gathered using a tracking cookie set by the [Google Analytics](http://www.google.com/analytics/) service and handled by Google as described in their [privacy policy](http://www.google.com/privacy.html).
-See your browser documentation for instructions on how to
-disable the cookie if you prefer not to share this data with Google.
+Part of this information is gathered using a tracking cookie set by the [Google Analytics](http://www.google.com/analytics/) service.
+Google's policy for the use of this information is described in their [privacy policy](http://www.google.com/privacy.html).
+See your browser's documentation for instructions on how to disable the cookie if you prefer not to share this data with Google.
 
 We use the gathered information to help us make our site more useful to visitors and to better understand how and when our site is used.
 We do not track or collect personally identifiable information or associate gathered data with any personally identifying information from other sources.

Modified: incubator/knox/trunk/books/common/header.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/common/header.md?rev=1526719&r1=1526718&r2=1526719&view=diff
==============================================================================
--- incubator/knox/trunk/books/common/header.md (original)
+++ incubator/knox/trunk/books/common/header.md Thu Sep 26 21:53:28 2013
@@ -18,8 +18,10 @@
 <link href="book.css" rel="stylesheet"/>
 
 [asl]: http://www.apache.org/licenses/LICENSE-2.0
-[sandbox]: http://hortonworks.com/products/hortonworks-sandbox
+[site]: http://knox.incubator.apache.org
+[jira]: https://issues.apache.org/jira/browse/KNOX
 [mirror]: http://www.apache.org/dyn/closer.cgi/incubator/knox
+[sandbox]: http://hortonworks.com/products/hortonworks-sandbox
 
 [y]: check.png "Yes"
 [n]: error.png "No"