Posted to commits@knox.apache.org by km...@apache.org on 2014/11/13 20:05:05 UTC

svn commit: r1639455 - in /knox: site/ site/books/knox-0-4-0/ site/books/knox-0-5-0/ trunk/books/0.5.0/

Author: kminder
Date: Thu Nov 13 19:05:04 2014
New Revision: 1639455

URL: http://svn.apache.org/r1639455
Log:
KNOX-471: User guide needs update after trying example

Modified:
    knox/site/books/knox-0-4-0/deployment-overview.png
    knox/site/books/knox-0-4-0/deployment-provider.png
    knox/site/books/knox-0-4-0/deployment-service.png
    knox/site/books/knox-0-4-0/runtime-overview.png
    knox/site/books/knox-0-4-0/runtime-request-processing.png
    knox/site/books/knox-0-5-0/knox-0-5-0.html
    knox/site/index.html
    knox/site/issue-tracking.html
    knox/site/license.html
    knox/site/mail-lists.html
    knox/site/project-info.html
    knox/site/team-list.html
    knox/trunk/books/0.5.0/book_client-details.md
    knox/trunk/books/0.5.0/book_gateway-details.md
    knox/trunk/books/0.5.0/config.md
    knox/trunk/books/0.5.0/knox_cli.md
    knox/trunk/books/0.5.0/service_hbase.md

Modified: knox/site/books/knox-0-4-0/deployment-overview.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-4-0/deployment-overview.png?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-4-0/deployment-provider.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-4-0/deployment-provider.png?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-4-0/deployment-service.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-4-0/deployment-service.png?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-4-0/runtime-overview.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-4-0/runtime-overview.png?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-4-0/runtime-request-processing.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-4-0/runtime-request-processing.png?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
Binary files - no diff available.

Modified: knox/site/books/knox-0-5-0/knox-0-5-0.html
URL: http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/knox-0-5-0.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/books/knox-0-5-0/knox-0-5-0.html (original)
+++ knox/site/books/knox-0-5-0/knox-0-5-0.html Thu Nov 13 19:05:04 2014
@@ -335,7 +335,7 @@ https://{gateway-host}:{gateway-port}/{g
     <li>Gateway: <code>jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password}?hive.server2.transport.mode=http;hive.server2.thrift.http.path={gateway-path}/{cluster-name}/hive</code></li>
     <li>Cluster: <code>http://{hive-host}:10001/cliservice</code></li>
   </ul></li>
-</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the file name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, <code>{oozie-host}</code>, <code>{hbase-host}</code> and <code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h3><a id="Configuration"></a>Configuration</h3><h
 4><a id="Topology+Descriptors"></a>Topology Descriptors</h4><p>The topology descriptor files provide the gateway with per-cluster configuration information. This includes configuration for both the providers within the gateway and the services within the Hadoop cluster. These files are located in <code>{GATEWAY_HOME}/deployments</code>. The general outline of this document looks like this.</p>
+</ul><p>The values for <code>{gateway-host}</code>, <code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the gateway configuration file (i.e. <code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for <code>{cluster-name}</code> is derived from the file name of the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, <code>{oozie-host}</code>, <code>{hbase-host}</code> and <code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. <code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively. Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.</p><h3><a id="Configuration"></a>Configuration</h
 3><h4><a id="Topology+Descriptors"></a>Topology Descriptors</h4><p>The topology descriptor files provide the gateway with per-cluster configuration information. This includes configuration for both the providers within the gateway and the services within the Hadoop cluster. These files are located in <code>{GATEWAY_HOME}/conf/topologies</code>. The general outline of this document looks like this.</p>
 <pre><code>&lt;topology&gt;
     &lt;gateway&gt;
         &lt;provider&gt;
@@ -538,7 +538,7 @@ ip-10-39-107-209.ec2.internal
       <td>name of the host to be used in the self-signed certificate. This allows multi-host deployments to specify the proper hostnames for hostname verification to succeed on the client side of the SSL connection. The default is “localhost”.</td>
     </tr>
   </tbody>
-</table><h4><a id="Topology+Redeploy"></a>Topology Redeploy</h4><h4><a id="redeploy+[--cluster+c]"></a>redeploy [&ndash;cluster c]</h4><p>Redeploys one or all of the gateway&rsquo;s clusters (a.k.a topologies).</p><h3><a id="Admin+API"></a>Admin API</h3><p>Access to the administrator functions of Knox are provided by the Admin REST API.</p><h4><a id="Admin+API+URL"></a>Admin API URL</h4><p>The URL mapping for the Knox Admin API is simple:</p>
+</table><h4><a id="Topology+Redeploy"></a>Topology Redeploy</h4><h5><a id="knoxcli.sh+redeploy+[--cluster+c]"></a>knoxcli.sh redeploy [&ndash;cluster c]</h5><p>Redeploys one or all of the gateway&rsquo;s clusters (a.k.a topologies).</p><h3><a id="Admin+API"></a>Admin API</h3><p>Access to the administrator functions of Knox are provided by the Admin REST API.</p><h4><a id="Admin+API+URL"></a>Admin API URL</h4><p>The URL mapping for the Knox Admin API is simple:</p>
 <table>
   <tbody>
     <tr>
@@ -1581,19 +1581,19 @@ println json.FileStatuses.FileStatus.pat
 session.shutdown()
 exit
 </code></pre><p>Notice the <code>Hdfs.rm</code> command. This is included simply to ensure that the script can be rerun. Without this an error would result the second time it is run.</p><h3><a id="Futures"></a>Futures</h3><p>The DSL supports the ability to invoke commands asynchronously via the later() invocation method. The object returned from the later() method is a java.util.concurrent.Future parametrized with the response type of the command. This is an example of how to asynchronously put a file to HDFS.</p>
-<pre><code>future = Hdfs.put(session).file(&quot;README&quot;).to(&quot;tmp/example/README&quot;).later()
+<pre><code>future = Hdfs.put(session).file(&quot;README&quot;).to(&quot;/tmp/example/README&quot;).later()
 println future.get().statusCode
 </code></pre><p>The future.get() method will block until the asynchronous command is complete. To illustrate the usefulness of this however multiple concurrent commands are required.</p>
-<pre><code>readmeFuture = Hdfs.put(session).file(&quot;README&quot;).to(&quot;tmp/example/README&quot;).later()
-licenseFuture = Hdfs.put(session).file(&quot;LICENSE&quot;).to(&quot;tmp/example/LICENSE&quot;).later()
+<pre><code>readmeFuture = Hdfs.put(session).file(&quot;README&quot;).to(&quot;/tmp/example/README&quot;).later()
+licenseFuture = Hdfs.put(session).file(&quot;LICENSE&quot;).to(&quot;/tmp/example/LICENSE&quot;).later()
 session.waitFor( readmeFuture, licenseFuture )
 println readmeFuture.get().statusCode
 println licenseFuture.get().statusCode
 </code></pre><p>The session.waitFor() method will wait for one or more asynchronous commands to complete.</p><h3><a id="Closures"></a>Closures</h3><p>Futures alone only provide asynchronous invocation of the command. What if some processing should also occur asynchronously once the command is complete. Support for this is provided by closures. Closures are blocks of code that are passed into the later() invocation method. In Groovy these are contained within {} immediately after a method. These blocks of code are executed once the asynchronous command is complete.</p>
-<pre><code>Hdfs.put(session).file(&quot;README&quot;).to(&quot;tmp/example/README&quot;).later(){ println it.statusCode }
+<pre><code>Hdfs.put(session).file(&quot;README&quot;).to(&quot;/tmp/example/README&quot;).later(){ println it.statusCode }
 </code></pre><p>In this example the put() command is executed on a separate thread and once complete the <code>println it.statusCode</code> block is executed on that thread. The it variable is automatically populated by Groovy and is a reference to the result that is returned from the future or now() method. The future example above can be rewritten to illustrate the use of closures.</p>
-<pre><code>readmeFuture = Hdfs.put(session).file(&quot;README&quot;).to(&quot;tmp/example/README&quot;).later() { println it.statusCode }
-licenseFuture = Hdfs.put(session).file(&quot;LICENSE&quot;).to(&quot;tmp/example/LICENSE&quot;).later() { println it.statusCode }
+<pre><code>readmeFuture = Hdfs.put(session).file(&quot;README&quot;).to(&quot;/tmp/example/README&quot;).later() { println it.statusCode }
+licenseFuture = Hdfs.put(session).file(&quot;LICENSE&quot;).to(&quot;/tmp/example/LICENSE&quot;).later() { println it.statusCode }
 session.waitFor( readmeFuture, licenseFuture )
 </code></pre><p>Again, the session.waitFor() method will wait for one or more asynchronous commands to complete.</p><h3><a id="Constructs"></a>Constructs</h3><p>In order to understand the DSL there are three primary constructs that need to be understood.</p><h4><a id="Session"></a>Session</h4><p>This construct encapsulates the client side session state that will be shared between all command invocations. In particular it will simplify the management of any tokens that need to be presented with each command invocation. It also manages a thread pool that is used by all asynchronous commands which is why it is important to call one of the shutdown methods.</p><p>The syntax associated with this is expected to change we expect that credentials will not need to be provided to the gateway. Rather it is expected that some form of access token will be used to initialize the session.</p><h4><a id="Services"></a>Services</h4><p>Services are the primary extension point for adding new suites of 
 commands. The current built in examples are: Hdfs, Job and Workflow. The desire for extensibility is the reason for the slightly awkward Hdfs.ls(session) syntax. Certainly something more like <code>session.hdfs().ls()</code> would have been preferred but this would prevent adding new commands easily. At a minimum it would result in extension commands with a different syntax from the &ldquo;built-in&rdquo; commands.</p><p>The service objects essentially function as a factory for a suite of commands.</p><h4><a id="Commands"></a>Commands</h4><p>Commands provide the behavior of the DSL. They typically follow a Fluent interface style in order to allow for single line commands. There are really three parts to each command: Request, Invocation, Response</p><h4><a id="Request"></a>Request</h4><p>The request is populated by all of the methods following the &ldquo;verb&rdquo; method and the &ldquo;invoke&rdquo; method. For example in <code>Hdfs.rm(session).ls(dir).now()</code> the request is 
 populated between the &ldquo;verb&rdquo; method <code>rm()</code> and the &ldquo;invoke&rdquo; method <code>now()</code>.</p><h4><a id="Invocation"></a>Invocation</h4><p>The invocation method controls how the request is invoked. Currently supported synchronous and asynchronous invocation. The now() method executes the request and returns the result immediately. The later() method submits the request to be executed later and returns a future from which the result can be retrieved. In addition later() invocation method can optionally be provided a closure to execute when the request is complete. See the Futures and Closures sections below for additional detail and examples.</p><h4><a id="Response"></a>Response</h4><p>The response contains the results of the invocation of the request. In most cases the response is a thin wrapper over the HTTP response. In fact many commands will share a single BasicResponse type that only provides a few simple methods.</p>
 <pre><code>public int getStatusCode()
@@ -2239,8 +2239,8 @@ curl -i -k -u guest:guest-password -X DE
 </table><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>The examples below illustrate the set of basic operations with HBase instance using Stargate REST API. Use following link to get more more details about HBase/Stargate API: <a href="http://wiki.apache.org/hadoop/Hbase/Stargate">http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><p>Note: Some HBase examples may not work due to enabled <a href="https://hbase.apache.org/book/hbase.accesscontrol.configuration.html">Access Control</a>. User may not be granted for performing operations in samples. In order to check if Access Control is configured in the HBase instance verify hbase-site.xml for a presence of <code>org.apache.hadoop.hbase.security.access.AccessController</code> in <code>hbase.coprocessor.master.classes</code> and <code>hbase.coprocessor.region.classes</code> properties.<br/>To grant the Read, Write, Create permissions to <code>guest</code> user execute the following command:</p>
 <pre><code>echo grant &#39;guest&#39;, &#39;RWC&#39; | hbase shell
 </code></pre><p>If you are using a cluster secured with Kerberos you will need to have used <code>kinit</code> to authenticate to the KDC </p><h4><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h4><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080</p>
-<pre><code>sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
-</code></pre><p>Port 60080 is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>.</p><h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox"></a>Configure Sandbox port mapping for VirtualBox</h4>
+<pre><code>sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
+</code></pre><p>Where {HBASE_BIN} is /usr/hdp/current/hbase-master/bin/ in the case of an HDP install.</p><p>Port 60080 is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/conf/topologies/sandbox.xml</code>.</p><h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox"></a>Configure Sandbox port mapping for VirtualBox</h4>
 <ol>
   <li>Select the VM</li>
   <li>Select menu Machine&gt;Settings&hellip;</li>
@@ -2250,17 +2250,17 @@ curl -i -k -u guest:guest-password -X DE
   <li>Press Plus button to insert new rule: Name=Stargate, Host Port=60080, Guest Port=60080</li>
   <li>Press OK to close the rule window</li>
   <li>Press OK to Network window save the changes</li>
-</ol><p>60080 pot is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>.</p><h4><a id="HBase+Restart"></a>HBase Restart</h4><p>If it becomes necessary to restart HBase you can log into the hosts running HBase and use these steps.</p>
-<pre><code>sudo /usr/lib/hbase/bin/hbase-daemon.sh stop rest
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop regionserver
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop zookeeper
-
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start regionserver
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
-sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start zookeeper
-sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
-</code></pre><h4><a id="HBase/Stargate+client+DSL"></a>HBase/Stargate client DSL</h4><p>For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].</p><h4><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h4>
+</ol><p>Port 60080 is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/conf/topologies/sandbox.xml</code>.</p><h4><a id="HBase+Restart"></a>HBase Restart</h4><p>If it becomes necessary to restart HBase you can log into the hosts running HBase and use these steps.</p>
+<pre><code>sudo {HBASE_BIN}/hbase-daemon.sh stop rest
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop regionserver
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop master
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop zookeeper
+
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start regionserver
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start master
+sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start zookeeper
+sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
+</code></pre><p>Where {HBASE_BIN} is /usr/hdp/current/hbase-master/bin/ in the case of an HDP install.</p><h4><a id="HBase/Stargate+client+DSL"></a>HBase/Stargate client DSL</h4><p>For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].</p><p>After launching the shell, execute the following command to be able to use the snippets below. <code>import org.apache.hadoop.gateway.shell.hbase.HBase;</code></p><h4><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h4>
 <ul>
   <li>Request
   <ul>

Modified: knox/site/index.html
URL: http://svn.apache.org/viewvc/knox/site/index.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/index.html (original)
+++ knox/site/index.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/site/issue-tracking.html
URL: http://svn.apache.org/viewvc/knox/site/issue-tracking.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/issue-tracking.html (original)
+++ knox/site/issue-tracking.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/site/license.html
URL: http://svn.apache.org/viewvc/knox/site/license.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/license.html (original)
+++ knox/site/license.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/site/mail-lists.html
URL: http://svn.apache.org/viewvc/knox/site/mail-lists.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/mail-lists.html (original)
+++ knox/site/mail-lists.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/site/project-info.html
URL: http://svn.apache.org/viewvc/knox/site/project-info.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/project-info.html (original)
+++ knox/site/project-info.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/site/team-list.html
URL: http://svn.apache.org/viewvc/knox/site/team-list.html?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/site/team-list.html (original)
+++ knox/site/team-list.html Thu Nov 13 19:05:04 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-11 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.6 at 2014-11-13 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20141111" />
+    <meta name="Date-Revision-yyyymmdd" content="20141113" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2014-11-11</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-11-13</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: knox/trunk/books/0.5.0/book_client-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/book_client-details.md?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/book_client-details.md (original)
+++ knox/trunk/books/0.5.0/book_client-details.md Thu Nov 13 19:05:04 2014
@@ -172,14 +172,14 @@ The DSL supports the ability to invoke c
 The object returned from the later() method is a java.util.concurrent.Future parametrized with the response type of the command.
 This is an example of how to asynchronously put a file to HDFS.
 
-    future = Hdfs.put(session).file("README").to("tmp/example/README").later()
+    future = Hdfs.put(session).file("README").to("/tmp/example/README").later()
     println future.get().statusCode
 
 The future.get() method will block until the asynchronous command is complete.
 To illustrate the usefulness of this however multiple concurrent commands are required.
 
-    readmeFuture = Hdfs.put(session).file("README").to("tmp/example/README").later()
-    licenseFuture = Hdfs.put(session).file("LICENSE").to("tmp/example/LICENSE").later()
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later()
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later()
     session.waitFor( readmeFuture, licenseFuture )
     println readmeFuture.get().statusCode
     println licenseFuture.get().statusCode
@@ -196,14 +196,14 @@ Closures are blocks of code that are pas
 In Groovy these are contained within {} immediately after a method.
 These blocks of code are executed once the asynchronous command is complete.
 
-    Hdfs.put(session).file("README").to("tmp/example/README").later(){ println it.statusCode }
+    Hdfs.put(session).file("README").to("/tmp/example/README").later(){ println it.statusCode }
 
 In this example the put() command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
 The it variable is automatically populated by Groovy and is a reference to the result that is returned from the future or now() method.
 The future example above can be rewritten to illustrate the use of closures.
 
-    readmeFuture = Hdfs.put(session).file("README").to("tmp/example/README").later() { println it.statusCode }
-    licenseFuture = Hdfs.put(session).file("LICENSE").to("tmp/example/LICENSE").later() { println it.statusCode }
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later() { println it.statusCode }
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later() { println it.statusCode }
     session.waitFor( readmeFuture, licenseFuture )
 
 Again, the session.waitFor() method will wait for one or more asynchronous commands to complete.
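
For anyone trying the futures and closures snippets end to end, a minimal self-contained script might look like the sketch below. The gateway URL, the guest/guest-password credentials and the README/LICENSE files are assumptions borrowed from the sandbox examples elsewhere in this guide; adjust them to your deployment.

    import org.apache.hadoop.gateway.shell.Hadoop
    import org.apache.hadoop.gateway.shell.hdfs.Hdfs

    gateway = "https://localhost:8443/gateway/sandbox"   // assumed sandbox topology
    username = "guest"                                    // assumed demo credentials
    password = "guest-password"

    session = Hadoop.login( gateway, username, password )

    // Clean up earlier results so the script can be rerun without errors.
    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()

    // Submit both uploads asynchronously; each closure prints its status code on completion.
    readmeFuture = Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" ).later() { println it.statusCode }
    licenseFuture = Hdfs.put( session ).file( "LICENSE" ).to( "/tmp/example/LICENSE" ).later() { println it.statusCode }

    // Block until both asynchronous commands complete, then release the session thread pool.
    session.waitFor( readmeFuture, licenseFuture )
    session.shutdown()

Saved to a file, the script should be runnable with the Groovy shell launcher described in the client details section (e.g. java -jar bin/shell.jar your-script.groovy).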

Modified: knox/trunk/books/0.5.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/book_gateway-details.md?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/book_gateway-details.md (original)
+++ knox/trunk/books/0.5.0/book_gateway-details.md Thu Nov 13 19:05:04 2014
@@ -70,7 +70,7 @@ The values for `{gateway-host}`, `{gatew
 
 The value for `{cluster-name}` is derived from the file name of the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
 
-The value for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}`, `{hbase-host}` and `{hive-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+The value for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}`, `{hbase-host}` and `{hive-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
 
 Note: The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively.
 Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.

Modified: knox/trunk/books/0.5.0/config.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/config.md?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/config.md (original)
+++ knox/trunk/books/0.5.0/config.md Thu Nov 13 19:05:04 2014
@@ -21,7 +21,7 @@
 
 The topology descriptor files provide the gateway with per-cluster configuration information.
 This includes configuration for both the providers within the gateway and the services within the Hadoop cluster.
-These files are located in `{GATEWAY_HOME}/deployments`.
+These files are located in `{GATEWAY_HOME}/conf/topologies`.
 The general outline of this document looks like this.
 
     <topology>

Modified: knox/trunk/books/0.5.0/knox_cli.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/knox_cli.md?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/knox_cli.md (original)
+++ knox/trunk/books/0.5.0/knox_cli.md Thu Nov 13 19:05:04 2014
@@ -75,6 +75,6 @@ argument | description
 --hostname	|	name of the host to be used in the self-signed certificate. This allows multi-host deployments to specify the proper hostnames for hostname verification to succeed on the client side of the SSL connection. The default is “localhost”.
 
 #### Topology Redeploy ####
-#### redeploy [--cluster c] ####
+##### knoxcli.sh redeploy [--cluster c] #####
 Redeploys one or all of the gateway's clusters (a.k.a topologies).
 

Modified: knox/trunk/books/0.5.0/service_hbase.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/service_hbase.md?rev=1639455&r1=1639454&r2=1639455&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/service_hbase.md (original)
+++ knox/trunk/books/0.5.0/service_hbase.md Thu Nov 13 19:05:04 2014
@@ -44,9 +44,11 @@ If you are using a cluster secured with 
 
 The command below launches the Stargate daemon on port 60080
 
-    sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
+    sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
 
-Port 60080 is used because it was specified in sample Hadoop cluster deployment `{GATEWAY_HOME}/deployments/sandbox.xml`.
+Where {HBASE_BIN} is /usr/hdp/current/hbase-master/bin/ in the case of an HDP install.
+
+Port 60080 is used because it was specified in sample Hadoop cluster deployment `{GATEWAY_HOME}/conf/topologies/sandbox.xml`.
 
 #### Configure Sandbox port mapping for VirtualBox ####
 
@@ -59,25 +61,30 @@ Port 60080 is used because it was specif
 7. Press OK to close the rule window
 8. Press OK to Network window save the changes
 
-60080 pot is used because it was specified in sample Hadoop cluster deployment `{GATEWAY_HOME}/deployments/sandbox.xml`.
+Port 60080 is used because it was specified in sample Hadoop cluster deployment `{GATEWAY_HOME}/conf/topologies/sandbox.xml`.
 
 #### HBase Restart ####
 
 If it becomes necessary to restart HBase you can log into the hosts running HBase and use these steps.
 
-    sudo /usr/lib/hbase/bin/hbase-daemon.sh stop rest
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop regionserver
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop zookeeper
-
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start regionserver
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
-    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start zookeeper
-    sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
+    sudo {HBASE_BIN}/hbase-daemon.sh stop rest
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop regionserver
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop master
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop zookeeper
+
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start regionserver
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start master
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start zookeeper
+    sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
 
+Where {HBASE_BIN} is /usr/hdp/current/hbase-master/bin/ in the case of an HDP install.
+ 
 #### HBase/Stargate client DSL ####
 
 For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].
+
+After launching the shell, execute the following command to be able to use the snippets below.
+`import org.apache.hadoop.gateway.shell.hbase.HBase;`
  
 #### systemVersion() - Query Software Version.
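
Once that import is in place, the commands in this section can be invoked from the shell. Below is a minimal sketch of the first one, reusing the sandbox gateway URL and guest credentials assumed in the WebHDFS examples; the HBase.session(...) chaining follows the DSL pattern used throughout this guide and should be treated as an illustration rather than a definitive reference.

    import org.apache.hadoop.gateway.shell.Hadoop
    import org.apache.hadoop.gateway.shell.hbase.HBase

    // Assumed sandbox topology and demo credentials; adjust for your deployment.
    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )

    // Query the Stargate/HBase software version through the gateway and print the raw response body.
    println HBase.session( session ).systemVersion().now().string

    session.shutdown()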