Posted to commits@knox.apache.org by km...@apache.org on 2013/09/30 04:05:06 UTC

git commit: KNOX-154: Remove INSTALL file and refer to User's Guide

Updated Branches:
  refs/heads/master 17f278785 -> bd29a4c9a


KNOX-154: Remove INSTALL file and refer to User's Guide


Project: http://git-wip-us.apache.org/repos/asf/incubator-knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-knox/commit/bd29a4c9
Tree: http://git-wip-us.apache.org/repos/asf/incubator-knox/tree/bd29a4c9
Diff: http://git-wip-us.apache.org/repos/asf/incubator-knox/diff/bd29a4c9

Branch: refs/heads/master
Commit: bd29a4c9a817279ac20c6f39567f43546b71d1db
Parents: 17f2787
Author: Kevin Minder <ke...@hortonworks.com>
Authored: Sun Sep 29 22:05:02 2013 -0400
Committer: Kevin Minder <ke...@hortonworks.com>
Committed: Sun Sep 29 22:05:02 2013 -0400

----------------------------------------------------------------------
 gateway-release/home/CHANGES | 103 +++++++++++-----
 gateway-release/home/INSTALL | 251 --------------------------------------
 gateway-release/home/ISSUES  |   9 +-
 gateway-release/home/README  |  31 ++---
 4 files changed, 89 insertions(+), 305 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/bd29a4c9/gateway-release/home/CHANGES
----------------------------------------------------------------------
diff --git a/gateway-release/home/CHANGES b/gateway-release/home/CHANGES
index d3866a5..64c8654 100644
--- a/gateway-release/home/CHANGES
+++ b/gateway-release/home/CHANGES
@@ -1,19 +1,56 @@
 ------------------------------------------------------------------------------
-Changes v0.2.0 - v0.3.0
+Release Notes - Apache Knox - Version 0.3.0
 ------------------------------------------------------------------------------
 
-Release Notes - Apache Knox - Version 0.3.0
+** New Feature
+    * [KNOX-8] - Support HBase via HBase/Stargate
+    * [KNOX-9] - Support Hive via JDBC+ODBC/Thrift/HTTP
+    * [KNOX-11] - Access Token Federation Provider
+    * [KNOX-27] - Access Kerberos secured Hadoop cluster via gateway using basic auth credentials
+    * [KNOX-31] - Create lifecycle scripts for gateway server
+    * [KNOX-50] - Ensure that all cluster topology details are rewritten for Oozie REST APIs
+    * [KNOX-61] - Create RPM packaging of Knox
+    * [KNOX-68] - Create start/stop scripts for gateway
+    * [KNOX-70] - Add unit and functional testing for HBase
+    * [KNOX-71] - Add unit and functional tests for Hive
+    * [KNOX-72] - Update site docs for HBase integration
+    * [KNOX-73] - Update site docs for Hive integration
+    * [KNOX-82] - Support properties file format for topology files
+    * [KNOX-85] - Provide Knox client DSL for HBase REST API
+    * [KNOX-98] - Cover HBase in samples
+    * [KNOX-99] - Cover Hive in samples
+    * [KNOX-116] - Add rewrite function so that authenticated username can be used in rewrite rules
+    * [KNOX-120] - Service Level Authorization Provider with ACLs
+    * [KNOX-131] - Cleanup noisy test PropertyTopologyBuilderTest
+    * [KNOX-169] - Test issue for patch test automation via PreCommit-Knox-Build job
+
+** Improvement
+    * [KNOX-40] - Verify LDAP over SSL
+    * [KNOX-42] - Change gateway URLs to match service URLs as closely as possible
+    * [KNOX-45] - Clean up usage and help output from server command line
+    * [KNOX-49] - Prevent Shiro rememberMe cookie from being returned
+    * [KNOX-55] - Support finer grain control over what is included in the URL rewrite
+    * [KNOX-56] - Populate RC directory with CHANGES on people.a.o
+    * [KNOX-75] - make Knox work with Secure Oozie
+    * [KNOX-97] - Populate staging and release directories with KEYS
+    * [KNOX-100] - document steps to make Knox work with secure hadoop cluster
+    * [KNOX-101] - Use session instead of hadoop in client DSL samples
+    * [KNOX-117] - Provide ServletContext attribute access to RewriteFunctionProcessor via UrlRewriteEnvironment
+    * [KNOX-118] - Provide rewrite functions that resolve service location information
+    * [KNOX-129] - Document topology file
+    * [KNOX-141] - Diagnostic debug output when generated SSL keystore info doesn't match environment
+    * [KNOX-143] - Change "out of the box" setup to use sandbox instead of sample
+    * [KNOX-153] - Document RPM based install process
+    * [KNOX-155] - Remove obsolete module gateway-demo
+    * [KNOX-164] - document hostmap provider properties
+    * [KNOX-168] - Complete User's Guide for 0.3.0 release
 
 ** Bug
-    * [KNOX-21] - Utilize hadoop.auth cookie to prevent re-authentication for every request
     * [KNOX-47] - Clean up i18n logging and any System.out or printStackTrace usages
     * [KNOX-57] - NPE when GATEWAY_HOME deleted out from underneath a running gateway instance
     * [KNOX-58] - NameNode endpoint exposed to gateway clients in runtime exception
     * [KNOX-60] - getting started - incorrect path to gateway-site.xml
     * [KNOX-69] - Branch expansion for specdir breaks on jenkins
-    * [KNOX-70] - Add unit and functional testing for HBase
-    * [KNOX-71] - Add unit and functional tests for Hive
-    * [KNOX-75] - make Knox work with Secure Oozie
     * [KNOX-76] - users.ldif file bundled with knox should not have hadoop service principals
     * [KNOX-77] - Need per-service outbound URL rewriting rules
     * [KNOX-78] - spnego authorization to cluster is failing
@@ -21,30 +58,40 @@ Release Notes - Apache Knox - Version 0.3.0
     * [KNOX-81] - Fix naming of release artifacts to include the word incubating
     * [KNOX-83] - do not use mapred as end user principal in examples
     * [KNOX-84] - use EXAMPLE.COM instead of sample.com in template files for kerberos realm
-    * [KNOX-94] - Document How to Add a Service to Knox
-
-** Improvement
-    * [KNOX-9] - Support Hive via JDBC+ODBC/Thrift/HTTP
-    * [KNOX-45] - Clean up usage and help output from server command line
-    * [KNOX-49] - Prevent Shiro rememberMe cookie from being returned
-    * [KNOX-55] - Support finer grain control over what is included in the URL rewrite
-    * [KNOX-56] - Populate RC directory with CHANGES on people.a.o
-    * [KNOX-63] - Integate Knox with Apache BigTop
-    * [KNOX-68] - Create start/stop scripts for gateway
-    * [KNOX-82] - Support properties file format for topology files
-
-** New Feature
-    * [KNOX-8] - Support HBase via HBase/Stargate
-    * [KNOX-11] - Access Token Federation Provider
-    * [KNOX-27] - Access Kerberos secured Hadoop cluster via gateway using basic auth credentials
-    * [KNOX-31] - Create lifecycle scripts for gateway server
-    * [KNOX-61] - Create RPM packaging of Knox
-
-** Test
-    * [KNOX-40] - Verify LDAP over SSL
+    * [KNOX-89] - Knox doing SPNego with Hadoop for every client request is not scalable
+    * [KNOX-102] - Update README File
+    * [KNOX-106] - The Host request header should be rewritten or removed
+    * [KNOX-107] - Service URLs not rewritten for WebHDFS GET redirects
+    * [KNOX-108] - Authentication failure submitting job via WebHCAT on Sandbox
+    * [KNOX-109] - Failed to submit workflow via Oozie against Sandbox HDP2Beta
+    * [KNOX-111] - Ensure that user identity details are rewritten for Oozie REST APIs
+    * [KNOX-124] - Fix the OR semantics in AclAuthz
+    * [KNOX-126] - HiveDeploymentContributor uses wrong external path /hive/api/vi
+    * [KNOX-127] - Sample topology file (sample.xml) uses inconsistent internal vs external addresses
+    * [KNOX-128] - Switch all samples to use guest user and home directory
+    * [KNOX-130] - Throw exception on credential store creation failure
+    * [KNOX-132] - Cleanup noisy test GatewayBasicFuncTest.testOozieJobSubmission()
+    * [KNOX-136] - Knox should support configurable session timeout
+    * [KNOX-137] - Log SSL Certificate Info
+    * [KNOX-142] - Remove Templeton from user facing config and samples and use WebHCat instead
+    * [KNOX-144] - Ensure cluster topology details are rewritten for HBase/Stargate REST APIs
+    * [KNOX-146] - Oozie rewrite rules for NN and JT need to be updated to use hostmap
+    * [KNOX-147] - Halt Startup when Gateway SSL Cert is Expired
+    * [KNOX-148] - Add cluster topology details rewrite for XML responses from HBase/Stargate REST APIs
+    * [KNOX-149] - Changes to AclsAuthz Config and Default Mode
+    * [KNOX-150] - correct comment on session timeout in sandbox topology file
+    * [KNOX-151] - add documentation for session timeout configuration
+    * [KNOX-152] - Dynamic redeploy of topo causes subsequent requests to fail
+    * [KNOX-154] - INSTALL file is out of date
+    * [KNOX-156] - file upload through Knox broken
+    * [KNOX-157] - Knox is not able to process PUT/POST requests with large payload
+    * [KNOX-158] - EmptyStackException while getting webhcat job queue in secure cluster
+    * [KNOX-159] - oozie job submission through knox fails for secure cluster
+    * [KNOX-162] - Support Providing Your own SSL Certificate
+    * [KNOX-163] - job submission through knox-webhcat results in NullPointerException
 
 ------------------------------------------------------------------------------
-Changes v0.1.0 - v0.2.0
+Release Notes - Apache Knox - Version 0.2.0
 ------------------------------------------------------------------------------
 HTTPS Support (Client side)
 Oozie Support

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/bd29a4c9/gateway-release/home/INSTALL
----------------------------------------------------------------------
diff --git a/gateway-release/home/INSTALL b/gateway-release/home/INSTALL
deleted file mode 100644
index 8d870a0..0000000
--- a/gateway-release/home/INSTALL
+++ /dev/null
@@ -1,251 +0,0 @@
-------------------------------------------------------------------------------
-Requirements
-------------------------------------------------------------------------------
-Java:
-  Java 1.6 or later
-
-Hadoop Cluster:
-  A local installation of a Hadoop Cluster is required at this time.  Hadoop
-  EC2 cluster and/or Sandbox installations are currently difficult to access
-  remotely via the Gateway. The EC2 and Sandbox limitation is caused by
-  Hadoop services running with internal IP addresses.  For the Gateway to work
-  in these cases it will need to be deployed on the EC2 cluster or Sandbox, at
-  this time.
-
-  The instructions that follow assume that the Gateway is *not* collocated
-  with the Hadoop clusters themselves and (most importantly) that the
-  hostnames and IP addresses of the cluster services are accessible by the
-  gateway wherever it happens to be running.
-
-  Ensure that the Hadoop cluster has WebHDFS, WebHCat (i.e. Templeton) and
-  Oozie configured, deployed and running.
-
-------------------------------------------------------------------------------
-Installation and Deployment Instructions
-------------------------------------------------------------------------------
-1. Install
-     Download and extract the knox-{VERSION}.zip file into the
-     installation directory that will contain your GATEWAY_HOME
-       jar xf knox-{VERSION}.zip
-     This will create a directory 'gateway' in your current directory.
-
-2. Enter Gateway Home directory
-     cd gateway
-   The fully qualified name of this directory will be referenced as
-   {GATEWAY_HOME} throughout the remainder of this document.
-
-3. Start the demo LDAP server (ApacheDS)
-   a. First, understand that the LDAP server provided here is for demonstration
-      purposes. You may configure the LDAP specifics within the topology
-      descriptor for the cluster as described in step 5 below, in order to
-      customize what LDAP instance to use. The assumption is that most users
-      will leverage the demo LDAP server while evaluating this release and
-      should therefore continue with the instructions here in step 3.
-   b. Edit {GATEWAY_HOME}/conf/users.ldif if required and add your users and
-      groups to the file.  A number of normal Hadoop users
-      (e.g. hdfs, mapred, hcat, hive) have already been included.  Note that
-      the passwords in this file are "fictitious" and have nothing to do with
-      the actual accounts on the Hadoop cluster you are using.  There is also
-      a copy of this file in the templates directory that you can use to start
-      over if necessary.
-   c. Start the LDAP server, pointing it at the conf directory where it will
-      find the users.ldif file.
-        java -jar bin/ldap.jar conf &
-      There are a number of log messages of the form "Created null." that can
-      safely be ignored.  Take note of the port on which it was started as this
-      needs to match later configuration.  This will create a directory named
-      'org.apache.hadoop.gateway.security.EmbeddedApacheDirectoryServer' that
-      can safely be ignored.
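
As an illustration of step 3b, an additional entry in {GATEWAY_HOME}/conf/users.ldif
for the 'guest' user used in the curl examples below might look roughly like this.
The DN base and object classes shown are assumptions, so follow the pattern of the
entries already present in the bundled file:

     dn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
     objectclass: top
     objectclass: person
     objectclass: organizationalPerson
     objectclass: inetOrgPerson
     cn: Guest
     sn: Guest
     uid: guest
     userPassword: guest-password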
-
-4. Start the Gateway server
-     java -jar bin/server.jar
-   a. Take note of the port identified in the logging output as you will need this for
-      accessing the gateway.
-   b. The server will prompt you for the master secret (password). This secret is used
-      to secure the artifacts used by the gateway server for things like SSL and
-      credential/password aliasing. This secret will have to be entered
-      at startup unless you choose to persist it. Remember this secret and keep it safe.
-      It represents the keys to the kingdom. See the Persisting the Master section for
-      more information.
-
-5. Configure the Gateway with the topology of your Hadoop cluster
-   a. Edit the file {GATEWAY_HOME}/deployments/sample.xml
-   b. Change the host and port in the urls of the <service> elements for
-      WEBHDFS, WEBHCAT, OOZIE, etc. services to match your Hadoop cluster
-      deployment.
-   c. The default configuration contains the LDAP URL for an LDAP server.  By
-      default that file is configured to access the demo ApacheDS based LDAP
-      server and its default configuration. By default, this server listens on
-      port 33389.  Optionally, you can change the LDAP URL for the LDAP server
-      to be used for authentication.  This is set via the
-      main.ldapRealm.contextFactory.url property in the
-      <gateway><provider><authentication> section.
-   d. Save the file.  The directory {GATEWAY_HOME}/deployments is monitored
-      by the Gateway server and reacts to the discovery of a new or changed
-      cluster topology descriptor by provisioning the endpoints and required
-      filter chains to serve the needs of each cluster as described by the
-      topology file.  Note that the name of the file excluding the extension
-      is also used as the path for that cluster in the URL.  So for example
-      the sample.xml file will result in Gateway URLs of the form
-        http://{gateway-host}:{gateway-port}/gateway/sandbox/webhdfs/v1
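
To make steps 5b and 5c concrete, the relevant fragments of a topology descriptor
look roughly like the sketch below. The layout is abbreviated and the cluster host
names are placeholders; treat the bundled sample.xml as the authoritative reference
for the full provider configuration:

     <topology>
         <gateway>
             <provider>
                 <role>authentication</role>
                 <enabled>true</enabled>
                 <param>
                     <name>main.ldapRealm.contextFactory.url</name>
                     <value>ldap://localhost:33389</value>
                 </param>
             </provider>
         </gateway>
         <service>
             <role>WEBHDFS</role>
             <url>http://your-webhdfs-host:50070/webhdfs</url>
         </service>
         <service>
             <role>WEBHCAT</role>
             <url>http://your-webhcat-host:50111/templeton</url>
         </service>
     </topology>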
-
-6. Test the installation and configuration of your Gateway
-   Invoke the LISTSTATUS operation on the HDFS file system exposed by your
-   configured WEBHDFS service using your web browser or curl:
-
-   curl -i -k -u guest:guest-password -X GET \
-     'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
-
-   The above command should produce output along the lines of the example
-   below.  The exact information returned depends on the content of HDFS in
-   your Hadoop cluster.
-
-     HTTP/1.1 200 OK
-       Content-Type: application/json
-       Content-Length: 760
-       Server: Jetty(6.1.26)
-
-     {"FileStatuses":{"FileStatus":[
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
-     ]}}
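
If WEBHCAT is also configured in the topology, a similar smoke test can be run
through the gateway. The path below assumes the WebHCat mapping described in the
"Mapping Gateway URLs" section later in this document and the standard WebHCat
status resource; a small JSON document reporting an "ok" status indicates that the
gateway can reach WebHCat:

   curl -i -k -u guest:guest-password -X GET \
     'https://localhost:8443/gateway/sandbox/webhcat/v1/status'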
-
-   For additional information on WebHDFS, Templeton/WebHCat and Oozie
-   REST APIs, see the following URLs respectively:
-
-   http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
-   http://hive.apache.org/docs/hcat_r0.5.0/rest.html
-   http://oozie.apache.org/docs/4.0.0/WebServicesAPI.html
-
-------------------------------------------------------------------------------
-Persisting the Master
-------------------------------------------------------------------------------
-The master secret is required to start the server. It is used by the gateway instance to access secured artifacts.
-Keystores, trust stores and credential stores are all protected with the master secret.
-
-You may persist the master secret by supplying the *-persist-master* switch at startup. This will result in a
-warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in
-order to protect the persisted password.
-
-It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by
-the user that the gateway is running as.
-
-After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your
-environment. This is probably the most important layer of defense for the master secret. Do not assume that the
-encryption is sufficient protection.
-
-A specific user should be created to run the gateway; this will protect a persisted master file.
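
For example, a first start that persists the master secret, followed by tightening
the permissions on the persisted file (at the location noted above), might look
like the following. The chmod value is a suggestion rather than a requirement; the
point is that only the user running the gateway should be able to read the file:

     java -jar bin/server.jar -persist-master
     chmod 600 config/security/master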
-
-------------------------------------------------------------------------------
-Management of Security Artifacts
-------------------------------------------------------------------------------
-There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications,
-access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of
-the gateway instances or generated and populated by the gateway instance itself.
-
-The following describes how this is coordinated for both standalone (development, demo, etc.) gateway instances and
-for instances that are part of a cluster of gateways.
-
-Upon start of the gateway server we:
-
-1. Look for an identity store at conf/security/keystores/gateway.jks. The identity store contains the certificate
-   and private key used to represent the identity of the server for SSL connections and signature creation.
-	a. If there is no identity store we create one and generate a self-signed certificate for use in standalone/demo
-   	   mode. The certificate is stored with an alias of gateway-identity.
-      b. If an identity store is found then we ensure that it can be loaded using the provided master secret and
-         that there is an alias called gateway-identity.
-2. Look for a credential store at conf/security/keystores/__gateway-credentials.jceks. This credential store is used
-   to store secrets/passwords that are used by the gateway. For instance, this is where the passphrase for accessing
-   the gateway-identity certificate is kept.
-   a. If there is no credential store found then we create one and populate it with a generated passphrase for the alias
-      gateway-identity-passphrase. This is coordinated with the population of the self-signed cert into the identity-store.
-   b. If a credential store is found then we ensure that it can be loaded using the provided master secret and that the
-      expected aliases have been populated with secrets.
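
As a sanity check of the identity store described in item 1, the standard JDK
keytool can be used to confirm that the gateway-identity alias is present. This is
an illustration only; the keystore password it prompts for is the master secret,
since the keystores are protected with it as described above:

     keytool -list -v \
       -keystore conf/security/keystores/gateway.jks \
       -alias gateway-identity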
-
-Upon deployment of a Hadoop cluster topology within the gateway we:
-
-1. Look for a credential store for the topology. For instance, we have a sample topology that gets deployed out of the box.
-   We look for conf/security/keystores/sample-credentials.jceks. This topology specific credential store is used for storing
-   secrets/passwords that are used for encrypting sensitive data with topology specific keys.
-   a. If no credential store is found for the topology being deployed then one is created for it. Population of the aliases
-      is delegated to the configured providers within the system that will require the use of a secret for a particular
-      task. They may programmatically set the value of the secret or choose to have the value for the specified alias
-      generated through the AliasService.
-   b. If a credential store is found then we ensure that it can be loaded with the provided master secret and the configured
-      providers have the opportunity to ensure that the aliases are populated and if not to populate them.
-
- By leveraging the algorithm described above we can provide a window of opportunity for management of these artifacts in a
- number of ways.
-
- 1. Using a single gateway instance as a master instance the artifacts can be generated or placed into the expected location
-    and then replicated across all of the slave instances before startup.
- 2. Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to
-    replicate them over the network. Of course, NFS mounts have their own challenges.
-
-Summary of Secrets to be Managed:
-
-1. Master secret - the same for all gateway instances in a cluster of gateways
-2. All security related artifacts are protected with the master secret
-3. Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway
-   instances in the cluster of gateways
-4. Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same
-   for the same topology across the cluster of gateway instances. However, they are specific to the topology - so secrets
-   for one hadoop cluster are different from those of another. This allows for failover from one gateway instance to another
-   even when encryption is being used while not allowing the compromise of one encryption key to expose the data for all clusters.
-
-NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able
-to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity
-store will not be able to be blindly replicated as hostname verification problems will ensue. Obviously, truststores will
-need to be taken into account as well.
-
-------------------------------------------------------------------------------
-Mapping Gateway URLs to Hadoop cluster URLs
-------------------------------------------------------------------------------
-The Gateway functions much like a reverse proxy.  As such it maintains a
-mapping of URLs that are exposed externally by the Gateway to URLs that are
-provided by the Hadoop cluster.  Examples of mappings for WebHDFS and
-WebHCat are shown below.  These mappings are generated from the combination
-of the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml)
-and the cluster topology descriptors
-(e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-  WebHDFS
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/webhdfs
-    Cluster: http://<webhdfs-host>:50070/webhdfs
-  WebHCat (Templeton)
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/webhcat
-    Cluster: http://<webhcat-host>:50111/templeton
-  Oozie
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/oozie
-    Cluster: http://<oozie-host>:11000/oozie
-
-The values for <gateway-host>, <gateway-port>, <gateway-path> are provided via
-the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml).
-
-The value for <cluster-name> is derived from the name of the cluster topology
-descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-The values for <webhdfs-host>, <webhcat-host> and <oozie-host> are provided via
-the cluster topology descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-Note: The ports 50070, 50111 and 11000 are the defaults for WebHDFS,
-      WebHCat and Oozie respectively. Their values can also be provided via
-      the cluster topology descriptor if your Hadoop cluster uses different
-      ports.
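
As a concrete illustration with made-up host names, a topology file named
sandbox.xml deployed on a gateway host gateway.example.com, using the same port
and path as the curl example earlier in this document, would yield mappings along
these lines:

  WebHDFS
    Gateway: https://gateway.example.com:8443/gateway/sandbox/webhdfs
    Cluster: http://namenode.example.com:50070/webhdfs
  WebHCat (Templeton)
    Gateway: https://gateway.example.com:8443/gateway/sandbox/webhcat
    Cluster: http://webhcat.example.com:50111/templeton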
-
-------------------------------------------------------------------------------
-Usage Examples
-------------------------------------------------------------------------------
-Please see the Apache Knox Gateway website for detailed examples.
-http://knox.incubator.apache.org/examples.html
-
-------------------------------------------------------------------------------
-Enabling logging
-------------------------------------------------------------------------------
-If necessary you can enable additional logging by editing the log4j.properties
-file in the conf directory.  Changing the rootLogger value from ERROR to DEBUG
-will generate a large amount of debug logging.  A number of useful, finer-grained
-loggers are also provided in the file.
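
For instance, assuming the default file contains a rootLogger line similar to the
first one below (the appender name shown is only a placeholder; keep whatever
appenders your copy already lists), enabling verbose logging is a one-line change.
A narrower alternative is to raise the level for the gateway's own package:

     # raise the root logger from ERROR to DEBUG (keep the existing appenders)
     log4j.rootLogger=DEBUG, drfa

     # or enable debug output for the gateway package only
     log4j.logger.org.apache.hadoop.gateway=DEBUG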
-

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/bd29a4c9/gateway-release/home/ISSUES
----------------------------------------------------------------------
diff --git a/gateway-release/home/ISSUES b/gateway-release/home/ISSUES
index 6f43c5d..185ab10 100644
--- a/gateway-release/home/ISSUES
+++ b/gateway-release/home/ISSUES
@@ -1,10 +1,9 @@
 ------------------------------------------------------------------------------
 Known Issues
 ------------------------------------------------------------------------------
-The Gateway cannot be used against either EC2 cluster unless the gateway
-is deployed within the EC2.
 
-If the cluster deployment descriptors in {GATEWAY_HOME}/deployments are
-incorrect, the errors logged by the gateway are overly detailed and not
-diagnostic enough.
+Access to Hive using JDBC via the Apache Knox Gateway will not be supported
+until Hive 0.12.0 is released.
 
+Access to Kerberos secured Stargate/HBase will not be supported until a
+future release of Stargate/HBase.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/bd29a4c9/gateway-release/home/README
----------------------------------------------------------------------
diff --git a/gateway-release/home/README b/gateway-release/home/README
index fa02d17..b27cc8d 100644
--- a/gateway-release/home/README
+++ b/gateway-release/home/README
@@ -1,5 +1,5 @@
 ------------------------------------------------------------------------------
-README file for Apache Knox Gateway
+README file for the Apache Knox Gateway
 ------------------------------------------------------------------------------
 This distribution includes cryptographic software.  The country in 
 which you currently reside may have restrictions on the import, 
@@ -40,12 +40,11 @@ specific needs for authentication.
 
 HTTP BASIC authentication with identity being asserted to the rest of the
 cluster via Pseudo/Simple authentication will be demonstrated for security.
-
 In addition to Pseudo identity assertion Knox now provides trusted proxy user
 identity assertion for providing access to Kerberos secured Hadoop clusters.
 
 For API aggregation, the Gateway provides a central endpoint for access to
-WebHDFS, WebHCat/Templeton, Hive, Starbase/HBase, Ambari and Oozie APIs for
+WebHDFS, WebHCat/Templeton, Hive, Stargate/HBase and Oozie APIs for
 each cluster.
 
 Future releases will extend these capabilities with additional
@@ -65,28 +64,18 @@ Please see the ISSUES file.
 ------------------------------------------------------------------------------
 Installation
 ------------------------------------------------------------------------------
-Please see the INSTALL file or the Apache Knox Gateway website.
-https://cwiki.apache.org/confluence/display/KNOX/Getting+Started
+Please see the Apache Knox Gateway User's Guide for details.
+http://knox.incubator.apache.org/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html#Getting+Started
 
 ------------------------------------------------------------------------------
 Examples
 ------------------------------------------------------------------------------
-Please see the Apache Knox Gateway website for detailed examples.
-https://cwiki.apache.org/confluence/display/KNOX/Examples
+Please see the Apache Knox Gateway User's Guide for detailed examples.
+http://knox.incubator.apache.org/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html#Basic+Usage
+http://knox.incubator.apache.org/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html#Service+Details
 
 ------------------------------------------------------------------------------
-Filing bugs
+Troubleshooting & Filing bugs
 ------------------------------------------------------------------------------
-We now have Jira setup for Knox. The following link will provide you with
-the information for accessing the Jira setup for Knox:
-http://knox.incubator.apache.org/issue-tracking.html
-
-Please provide the output from the following command in your filed Jira issue:
-
-    java -jar bin/gateway-${gateway-version}.jar -version
-
-in the Environment section.  Also include the version of Hadoop being used.
-
-Please feel free to email the Knox user list as well (user AT knox.incubator.apache.org)
-with a subject prefix of [BUG] describing the issue.
-
+Please see the Apache Knox Gateway User's Guide for detailed information.
+http://knox.incubator.apache.org/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html#Troubleshooting