Posted to commits@knox.apache.org by km...@apache.org on 2013/10/07 14:59:00 UTC

svn commit: r1529828 - in /incubator/knox: site/ site/books/knox-incubating-0-3-0/ trunk/books/0.3.0/

Author: kminder
Date: Mon Oct  7 12:58:59 2013
New Revision: 1529828

URL: http://svn.apache.org/r1529828
Log:
Switch references from bob to guest.

Modified:
    incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
    incubator/knox/site/index.html
    incubator/knox/site/issue-tracking.html
    incubator/knox/site/license.html
    incubator/knox/site/mail-lists.html
    incubator/knox/site/project-info.html
    incubator/knox/site/team-list.html
    incubator/knox/trunk/books/0.3.0/book_client-details.md
    incubator/knox/trunk/books/0.3.0/book_getting-started.md
    incubator/knox/trunk/books/0.3.0/config_authz.md
    incubator/knox/trunk/books/0.3.0/config_id_assertion.md
    incubator/knox/trunk/books/0.3.0/service_oozie.md
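
The change itself is a mechanical substitution of the "guest" demo account for "bob" throughout the 0.3.0 book sources, with the site HTML regenerated from them. For anyone applying a similar rename locally, a sweep along these lines would cover most occurrences (an illustrative sketch only, assuming GNU sed; not necessarily how this commit was produced):

    # Hypothetical bulk rename of the demo account in the book sources.
    cd incubator/knox/trunk/books/0.3.0
    grep -rl 'bob' . | xargs sed -i \
        -e 's/bob-password/guest-password/g' \
        -e 's/\bbob\b/guest/g'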

Modified: incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html (original)
+++ incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html Mon Oct  7 12:58:59 2013
@@ -243,7 +243,7 @@
 </code></pre><p>If for some reason the gateway is stopped other than by using the command above you may need to clear the tracking PID.</p>
 <pre><code>bin/gateway.sh clean
 </code></pre><p><strong>CAUTION: This command will also clear any log output in /var/log/knox so use this with caution.</strong></p><h5><a id="4.+Configure+the+Gateway+with+the+topology+of+your+Hadoop+cluster"></a>4. Configure the Gateway with the topology of your Hadoop cluster</h5><p>Edit the file <code>{GATEWAY_HOME}/deployments/sandbox.xml</code></p><p>Change the host and port in the urls of the <code>&lt;service&gt;</code> elements for WEBHDFS, WEBHCAT, OOZIE, WEBHBASE and HIVE services to match your Hadoop cluster deployment.</p><p>The default configuration contains the LDAP URL for a LDAP server. By default that file is configured to access the demo ApacheDS based LDAP server and its default configuration. The ApacheDS based LDAP server listens on port 33389 by default. Optionally, you can change the LDAP URL for the LDAP server to be used for authentication. This is set via the <code>main.ldapRealm.contextFactory.url</code> property in the <code>&lt;gateway&gt;&lt;provider&g
 t;&lt;authentication&gt;</code> section. If you use an LDAP system other than the demo LDAP server you may need to change additional configuration as well.</p><p>Save the file. The directory <code>{GATEWAY_HOME}/deployments</code> is monitored by the gateway server. When a new or changed cluster topology descriptor is detected, it will provision the endpoints for the services described in the topology descriptor. Note that the name of the file excluding the extension is also used as the path for that cluster in the URL. For example the <code>sandbox.xml</code> file will result in gateway URLs of the form <code>http://{gateway-host}:{gateway-port}/gateway/sandbox/webhdfs</code>.</p><h5><a id="5.+Test+the+installation"></a>5. Test the installation</h5><p>Invoke the LISTSTATUS operation on WebHDFS via the gateway. This will return a directory listing of the root (i.e. /) directory of HDFS.</p>
-<pre><code>curl -i -k -u bob:bob-password -X GET \
+<pre><code>curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
 </code></pre><p>The results of the above command should result in something along the lines of the output below. The exact information returned is subject to the content within HDFS in your Hadoop cluster. Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS. It does not necessarily prove that any of the other services are correctly configured to be accessible. To validate that, see the sections for the individual services in <a href="#Service+Details">Service Details</a>.</p>
 <pre><code>HTTP/1.1 200 OK
@@ -512,14 +512,14 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
     &lt;enabled&gt;true&lt;/enabled&gt;
     &lt;param&gt;
         &lt;name&gt;principal.mapping&lt;/name&gt;
-        &lt;value&gt;bob=hdfs;&lt;/value&gt;
+        &lt;value&gt;guest=hdfs;&lt;/value&gt;
     &lt;/param&gt;
     &lt;param&gt;
         &lt;name&gt;group.principal.mapping&lt;/name&gt;
         &lt;value&gt;*=users;hdfs=admin&lt;/value&gt;
     &lt;/param&gt;
 &lt;/provider&gt;
-</code></pre><p>This configuration identifies the same identity assertion provider but does provide principal and group mapping rules. In this case, when a user is authenticated as &ldquo;bob&rdquo; his identity is actually asserted to the Hadoop cluster as &ldquo;hdfs&rdquo;. In addition, since there are group principal mappings defined, he will also be considered as a member of the groups &ldquo;users&rdquo; and &ldquo;admin&rdquo;. In this particular example the wildcard &quot;*&ldquo; is used to indicate that all authenticated users need to be considered members of the &rdquo;users&ldquo; group and that only the user &rdquo;hdfs&ldquo; is mapped to be a member of the &rdquo;admin&quot; group.</p><p><strong>NOTE: These group memberships are currently only meaningful for Service Level Authorization using the AclsAuthorization provider. The groups are not currently asserted to the Hadoop cluster at this time. See the Authorization section within this guide to see how this is used.<
 /strong></p><p>The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.</p><p>This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.</p><p>When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal.</p><p>If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.</p><h4><a id="Principal+Mapping"></a>Principal Mapping</h4>
+</code></pre><p>This configuration identifies the same identity assertion provider but does provide principal and group mapping rules. In this case, when a user is authenticated as &ldquo;guest&rdquo; his identity is actually asserted to the Hadoop cluster as &ldquo;hdfs&rdquo;. In addition, since there are group principal mappings defined, he will also be considered as a member of the groups &ldquo;users&rdquo; and &ldquo;admin&rdquo;. In this particular example the wildcard &quot;*&ldquo; is used to indicate that all authenticated users need to be considered members of the &rdquo;users&ldquo; group and that only the user &rdquo;hdfs&ldquo; is mapped to be a member of the &rdquo;admin&quot; group.</p><p><strong>NOTE: These group memberships are currently only meaningful for Service Level Authorization using the AclsAuthorization provider. The groups are not currently asserted to the Hadoop cluster at this time. See the Authorization section within this guide to see how this is used
 .</strong></p><p>The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.</p><p>This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.</p><p>When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal.</p><p>If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.</p><h4><a id="Principal+Mapping"></a>Principal Mapping</h4>
 <pre><code>&lt;param&gt;
     &lt;name&gt;principal.mapping&lt;/name&gt;
     &lt;value&gt;{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]&lt;/value&gt;
@@ -527,12 +527,12 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 </code></pre><p>For instance:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;principal.mapping&lt;/name&gt;
-    &lt;value&gt;bob=hdfs&lt;/value&gt;
+    &lt;value&gt;guest=hdfs&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><p>For multiple mappings:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;principal.mapping&lt;/name&gt;
-    &lt;value&gt;bob,alice=hdfs;mary=alice2&lt;/value&gt;
+    &lt;value&gt;guest,alice=hdfs;mary=alice2&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h4><a id="Group+Principal+Mapping"></a>Group Principal Mapping</h4>
 <pre><code>&lt;param&gt;
@@ -547,7 +547,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 </code></pre><p>this configuration indicates that all (*) authenticated users are members of the &ldquo;users&rdquo; group and that user &ldquo;hdfs&rdquo; is a member of the admin group. Group principal mapping has been added along with the authorization provider described in this document.</p><h3><a id="Authorization"></a>Authorization</h3><h4><a id="Service+Level+Authorization"></a>Service Level Authorization</h4><p>The Knox Gateway has an out-of-the-box authorization provider that allows administrators to restrict access to the individual services within a Hadoop cluster.</p><p>This provider utilizes a simple and familiar pattern of using ACLs to protect Hadoop resources by specifying users, groups and ip addresses that are permitted access.</p><p>Note: In the examples below {serviceName} represents a real service name (e.g. WEBHDFS) and would be replaced with these values in an actual configuration.</p><h5><a id="Usecases"></a>Usecases</h5><h6><a id="USECASE-1:+Restrict+access+
 to+specific+Hadoop+services+to+specific+Users"></a>USECASE-1: Restrict access to specific Hadoop services to specific Users</h6>
 <pre><code>&lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;*;*&lt;/value&gt;
+    &lt;value&gt;guest;*;*&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-2:+Restrict+access+to+specific+Hadoop+services+to+specific+Groups"></a>USECASE-2: Restrict access to specific Hadoop services to specific Groups</h6>
 <pre><code>&lt;param&gt;
@@ -566,7 +566,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;/param&gt;
 &lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;admin;*&lt;/value&gt;
+    &lt;value&gt;guest;admin;*&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-5:+Restrict+access+to+specific+Hadoop+services+to+specific+Users+OR+users+from+specific+Remote+IPs"></a>USECASE-5: Restrict access to specific Hadoop services to specific Users OR users from specific Remote IPs</h6>
 <pre><code>&lt;param&gt;
@@ -575,7 +575,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;/param&gt;
 &lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;*;127.0.0.1&lt;/value&gt;
+    &lt;value&gt;guest;*;127.0.0.1&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-6:+Restrict+access+to+specific+Hadoop+services+to+users+within+specific+Groups+OR+from+specific+Remote+IPs"></a>USECASE-6: Restrict access to specific Hadoop services to users within specific Groups OR from specific Remote IPs</h6>
 <pre><code>&lt;param&gt;
@@ -593,17 +593,17 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;/param&gt;
 &lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;admin;127.0.0.1&lt;/value&gt;
+    &lt;value&gt;guest;admin;127.0.0.1&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-8:+Restrict+access+to+specific+Hadoop+services+to+specific+Users+AND+users+within+specific+Groups"></a>USECASE-8: Restrict access to specific Hadoop services to specific Users AND users within specific Groups</h6>
 <pre><code>&lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;admin;*&lt;/value&gt;
+    &lt;value&gt;guest;admin;*&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-9:+Restrict+access+to+specific+Hadoop+services+to+specific+Users+AND+users+from+specific+Remote+IPs"></a>USECASE-9: Restrict access to specific Hadoop services to specific Users AND users from specific Remote IPs</h6>
 <pre><code>&lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;*;127.0.0.1&lt;/value&gt;
+    &lt;value&gt;guest;*;127.0.0.1&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h6><a id="USECASE-10:+Restrict+access+to+specific+Hadoop+services+to+users+within+specific+Groups+AND+from+specific+Remote+IPs"></a>USECASE-10: Restrict access to specific Hadoop services to users within specific Groups AND from specific Remote IPs</h6>
 <pre><code>&lt;param&gt;
@@ -613,7 +613,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 </code></pre><h6><a id="USECASE-11:+Restrict+access+to+specific+Hadoop+services+to+specific+Users+AND+users+within+specific+Groups+AND+from+specific+Remote+IPs"></a>USECASE-11: Restrict access to specific Hadoop services to specific Users AND users within specific Groups AND from specific Remote IPs</h6>
 <pre><code>&lt;param&gt;
     &lt;name&gt;{serviceName}.acl&lt;/name&gt;
-    &lt;value&gt;bob;admins;127.0.0.1&lt;/value&gt;
+    &lt;value&gt;guest;admins;127.0.0.1&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><h4><a id="Configuration"></a>Configuration</h4><p>ACLs are bound to services within the topology descriptors by introducing the authorization provider with configuration like:</p>
 <pre><code>&lt;provider&gt;
@@ -654,7 +654,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 </code></pre><p>this processing behavior requires that the effective user satisfy one of the parts of the ACL definition in order to be granted access. For instance:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;webhdfs.acl&lt;/name&gt;
-    &lt;value&gt;hdfs,bob;admin;127.0.0.2,127.0.0.3&lt;/value&gt;
+    &lt;value&gt;hdfs,guest;admin;127.0.0.2,127.0.0.3&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><p>You may also set the ACL processing mode at the top level for the topology. This essentially sets the default for the managed cluster. It may then be overridden at the service level as well.</p>
 <pre><code>&lt;param&gt;
@@ -663,7 +663,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;/param&gt;
 </code></pre><p>this configuration indicates that ONE of the following must be satisfied to be granted access:</p>
 <ol>
-  <li>the user is &ldquo;hdfs&rdquo; or &ldquo;bob&rdquo; OR</li>
+  <li>the user is &ldquo;hdfs&rdquo; or &ldquo;guest&rdquo; OR</li>
   <li>the user is in &ldquo;admin&rdquo; group OR</li>
   <li>the request is coming from 127.0.0.2 or 127.0.0.3</li>
 </ol><h4><a id="Other+Related+Configuration"></a>Other Related Configuration</h4><p>The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.</p><p>This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal. If there is no mapping to another principal then the authenticated or primary principal is then the effective principal. Principal mapping has actually been available in the identity assertion provider from the beginning of Knox and is documented fully in the Identity Assertion section of this guide.</p>
@@ -674,7 +674,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 </code></pre><p>For instance:</p>
 <pre><code>&lt;param&gt;
     &lt;name&gt;principal.mapping&lt;/name&gt;
-    &lt;value&gt;bob=hdfs&lt;/value&gt;
+    &lt;value&gt;guest=hdfs&lt;/value&gt;
 &lt;/param&gt;
 </code></pre><p>In addition, we allow the administrator to map groups to effective principals. This is done through another param within the identity assertion provider:</p>
 <pre><code>&lt;param&gt;
@@ -720,7 +720,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
             &lt;enabled&gt;true&lt;/enabled&gt;
             &lt;param&gt;
                 &lt;name&gt;principal.mapping&lt;/name&gt;
-                &lt;value&gt;bob=hdfs;&lt;/value&gt;
+                &lt;value&gt;guest=hdfs;&lt;/value&gt;
             &lt;/param&gt;
             &lt;param&gt;
                 &lt;name&gt;group.principal.mapping&lt;/name&gt;
@@ -854,7 +854,7 @@ knox:000&gt; Hdfs.put( hadoop ).file( &q
 Status=201
 </code></pre><p>Notice that a different filename is used for the destination. Without this an error would have resulted. Of course the DSL also provides a command to list the contents of a directory.</p>
 <pre><code>knox:000&gt; println Hdfs.ls( hadoop ).dir( &quot;/tmp/example&quot; ).now().string
-{&quot;FileStatuses&quot;:{&quot;FileStatus&quot;:[{&quot;accessTime&quot;:1363711366977,&quot;blockSize&quot;:134217728,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:19395,&quot;modificationTime&quot;:1363711366977,&quot;owner&quot;:&quot;bob&quot;,&quot;pathSuffix&quot;:&quot;README&quot;,&quot;permission&quot;:&quot;644&quot;,&quot;replication&quot;:1,&quot;type&quot;:&quot;FILE&quot;},{&quot;accessTime&quot;:1363711375617,&quot;blockSize&quot;:134217728,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:19395,&quot;modificationTime&quot;:1363711375617,&quot;owner&quot;:&quot;bob&quot;,&quot;pathSuffix&quot;:&quot;README2&quot;,&quot;permission&quot;:&quot;644&quot;,&quot;replication&quot;:1,&quot;type&quot;:&quot;FILE&quot;}]}}
+{&quot;FileStatuses&quot;:{&quot;FileStatus&quot;:[{&quot;accessTime&quot;:1363711366977,&quot;blockSize&quot;:134217728,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:19395,&quot;modificationTime&quot;:1363711366977,&quot;owner&quot;:&quot;guest&quot;,&quot;pathSuffix&quot;:&quot;README&quot;,&quot;permission&quot;:&quot;644&quot;,&quot;replication&quot;:1,&quot;type&quot;:&quot;FILE&quot;},{&quot;accessTime&quot;:1363711375617,&quot;blockSize&quot;:134217728,&quot;group&quot;:&quot;hdfs&quot;,&quot;length&quot;:19395,&quot;modificationTime&quot;:1363711375617,&quot;owner&quot;:&quot;guest&quot;,&quot;pathSuffix&quot;:&quot;README2&quot;,&quot;permission&quot;:&quot;644&quot;,&quot;replication&quot;:1,&quot;type&quot;:&quot;FILE&quot;}]}}
 </code></pre><p>It is a design decision of the DSL to not provide type safe classes for various request and response payloads. Doing so would provide an undesirable coupling between the DSL and the service implementation. It also would make adding new commands much more difficult. See the Groovy section below for a variety of capabilities and tools for working with JSON and XML to make this easy. The example below shows the use of JsonSlurper and GPath to extract content from a JSON response.</p>
 <pre><code>knox:000&gt; import groovy.json.JsonSlurper
 knox:000&gt; text = Hdfs.ls( hadoop ).dir( &quot;/tmp/example&quot; ).now().string
@@ -874,8 +874,8 @@ import org.apache.hadoop.gateway.shell.h
 import groovy.json.JsonSlurper
 
 gateway = &quot;https://localhost:8443/gateway/sandbox&quot;
-username = &quot;bob&quot;
-password = &quot;bob-password&quot;
+username = &quot;guest&quot;
+password = &quot;guest-password&quot;
 dataFile = &quot;README&quot;
 
 hadoop = Hadoop.login( gateway, username, password )
@@ -1503,8 +1503,8 @@ import static java.util.concurrent.TimeU
 gateway = &quot;https://localhost:8443/gateway/sandbox&quot;
 jobTracker = &quot;sandbox:50300&quot;;
 nameNode = &quot;sandbox:8020&quot;;
-username = &quot;bob&quot;
-password = &quot;bob-password&quot;
+username = &quot;guest&quot;
+password = &quot;guest-password&quot;
 inputFile = &quot;LICENSE&quot;
 jarFile = &quot;samples/hadoop-examples.jar&quot;
 
@@ -1571,81 +1571,81 @@ println &quot;Shutdown &quot; + hadoop.s
 exit
 </code></pre><h4><a id="Example+#3:+WebHDFS+&+Templeton/WebHCat+via+cURL"></a>Example #3: WebHDFS &amp; Templeton/WebHCat via cURL</h4><p>The example below illustrates the sequence of curl commands that could be used to run a &ldquo;word count&rdquo; map reduce job. It utilizes the hadoop-examples.jar from a Hadoop install for running a simple word count job. A copy of that jar has been included in the samples directory for convenience. Take care to follow the instructions below for steps 4/5 and 6/7 where the Location header returned by the call to the NameNode is copied for use with the call to the DataNode that follows it. These replacement values are identified with { } markup.</p>
 <pre><code># 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-curl -i -k -u bob:bob-password -X DELETE \
+curl -i -k -u guest:guest-password -X DELETE \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
 
 # 1. Create a test input directory /tmp/test/input
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input?op=MKDIRS&#39;
 
 # 2. Create a test output directory /tmp/test/input
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=MKDIRS&#39;
 
 # 3. Create the inode for hadoop-examples.jar in /tmp/test
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/hadoop-examples.jar?op=CREATE&#39;
 
 # 4. Upload hadoop-examples.jar to /tmp/test.  Use a hadoop-examples.jar from a Hadoop install.
-curl -i -k -u bob:bob-password -T samples/hadoop-examples.jar -X PUT &#39;{Value Location header from command above}&#39;
+curl -i -k -u guest:guest-password -T samples/hadoop-examples.jar -X PUT &#39;{Value Location header from command above}&#39;
 
 # 5. Create the inode for a sample file README in /tmp/test/input
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE&#39;
 
 # 6. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
-curl -i -k -u bob:bob-password -T README -X PUT &#39;{Value of Location header from command above}&#39;
+curl -i -k -u guest:guest-password -T README -X PUT &#39;{Value of Location header from command above}&#39;
 
 # 7. Submit the word count job via WebHCat/Templeton.
 # Take note of the Job ID in the JSON response as this will be used in the next step.
-curl -v -i -k -u bob:bob-password -X POST \
+curl -v -i -k -u guest:guest-password -X POST \
     -d jar=/tmp/test/hadoop-examples.jar -d class=wordcount \
     -d arg=/tmp/test/input -d arg=/tmp/test/output \
     &#39;https://localhost:8443/gateway/sample/templeton/api/v1/mapreduce/jar&#39;
 
 # 8. Look at the status of the job
-curl -i -k -u bob:bob-password -X GET \
+curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sample/templeton/api/v1/queue/{Job ID returned in JSON body from previous step}&#39;
 
 # 9. Look at the status of the job queue
-curl -i -k -u bob:bob-password -X GET \
+curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sample/templeton/api/v1/queue&#39;
 
 # 10. List the contents of the output directory /tmp/test/output
-curl -i -k -u bob:bob-password -X GET \
+curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=LISTSTATUS&#39;
 
 # 11. Optionally cleanup the test directory
-curl -i -k -u bob:bob-password -X DELETE \
+curl -i -k -u guest:guest-password -X DELETE \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
 </code></pre><h4><a id="Example+#4:+WebHDFS+&+Oozie+via+cURL"></a>Example #4: WebHDFS &amp; Oozie via cURL</h4><p>The example below illustrates the sequence of curl commands that could be used to run a &ldquo;word count&rdquo; map reduce job via an Oozie workflow. It utilizes the hadoop-examples.jar from a Hadoop install for running a simple word count job. A copy of that jar has been included in the samples directory for convenience. Take care to follow the instructions below where replacement values are required. These replacement values are identified with { } markup.</p>
 <pre><code># 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-curl -i -k -u bob:bob-password -X DELETE \
+curl -i -k -u guest:guest-password -X DELETE \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
 
 # 1. Create the inode for workflow definition file in /tmp/test
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/workflow.xml?op=CREATE&#39;
 
 # 2. Upload the workflow definition file.  This file can be found in {GATEWAY_HOME}/templates
-curl -i -k -u bob:bob-password -T templates/workflow-definition.xml -X PUT \
+curl -i -k -u guest:guest-password -T templates/workflow-definition.xml -X PUT \
     &#39;{Value Location header from command above}&#39;
 
 # 3. Create the inode for hadoop-examples.jar in /tmp/test/lib
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/lib/hadoop-examples.jar?op=CREATE&#39;
 
 # 4. Upload hadoop-examples.jar to /tmp/test/lib.  Use a hadoop-examples.jar from a Hadoop install.
-curl -i -k -u bob:bob-password -T samples/hadoop-examples.jar -X PUT \
+curl -i -k -u guest:guest-password -T samples/hadoop-examples.jar -X PUT \
     &#39;{Value Location header from command above}&#39;
 
 # 5. Create the inode for a sample input file readme.txt in /tmp/test/input.
-curl -i -k -u bob:bob-password -X PUT \
+curl -i -k -u guest:guest-password -X PUT \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE&#39;
 
 # 6. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
 # The sample below uses this README file found in {GATEWAY_HOME}.
-curl -i -k -u bob:bob-password -T README -X PUT \
+curl -i -k -u guest:guest-password -T README -X PUT \
     &#39;{Value of Location header from command above}&#39;
 
 # 7. Create the job configuration file by replacing the {NameNode host:port} and {JobTracker host:port}
@@ -1660,19 +1660,19 @@ sed -e s/REPLACE.NAMENODE.RPCHOSTPORT/{N
 
 # 8. Submit the job via Oozie
 # Take note of the Job ID in the JSON response as this will be used in the next step.
-curl -i -k -u bob:bob-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
+curl -i -k -u guest:guest-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
     &#39;https://localhost:8443/gateway/sample/oozie/api/v1/jobs?action=start&#39;
 
 # 9. Query the job status via Oozie.
-curl -i -k -u bob:bob-password -X GET \
+curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sample/oozie/api/v1/job/{Job ID returned in JSON body from previous step}&#39;
 
 # 10. List the contents of the output directory /tmp/test/output
-curl -i -k -u bob:bob-password -X GET \
+curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=LISTSTATUS&#39;
 
 # 11. Optionally cleanup the test directory
-curl -i -k -u bob:bob-password -X DELETE \
+curl -i -k -u guest:guest-password -X DELETE \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
 </code></pre><h3><a id="HBase"></a>HBase</h3><p>TODO</p><h4><a id="HBase+URL+Mapping"></a>HBase URL Mapping</h4><p>TODO</p><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>TODO</p><p>The examples below illustrate a set of basic operations with an HBase instance using the Stargate REST API. Use the following link to get more details about the HBase/Stargate API: <a href="http://wiki.apache.org/hadoop/Hbase/Stargate">http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><h3><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h3><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080.</p>
 <pre><code>sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080

Modified: incubator/knox/site/index.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/index.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/index.html (original)
+++ incubator/knox/site/index.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/issue-tracking.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/issue-tracking.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/issue-tracking.html (original)
+++ incubator/knox/site/issue-tracking.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/license.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/license.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/license.html (original)
+++ incubator/knox/site/license.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/mail-lists.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/mail-lists.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/mail-lists.html (original)
+++ incubator/knox/site/mail-lists.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/project-info.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/project-info.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/project-info.html (original)
+++ incubator/knox/site/project-info.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/team-list.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/team-list.html?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/site/team-list.html (original)
+++ incubator/knox/site/team-list.html Mon Oct  7 12:58:59 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 7, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131004" />
+    <meta name="Date-Revision-yyyymmdd" content="20131007" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/trunk/books/0.3.0/book_client-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_client-details.md?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_client-details.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_client-details.md Mon Oct  7 12:58:59 2013
@@ -112,7 +112,7 @@ Without this an error would have resulte
 Of course the DSL also provides a command to list the contents of a directory.
 
     knox:000> println Hdfs.ls( hadoop ).dir( "/tmp/example" ).now().string
-    {"FileStatuses":{"FileStatus":[{"accessTime":1363711366977,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711366977,"owner":"bob","pathSuffix":"README","permission":"644","replication":1,"type":"FILE"},{"accessTime":1363711375617,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711375617,"owner":"bob","pathSuffix":"README2","permission":"644","replication":1,"type":"FILE"}]}}
+    {"FileStatuses":{"FileStatus":[{"accessTime":1363711366977,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711366977,"owner":"guest","pathSuffix":"README","permission":"644","replication":1,"type":"FILE"},{"accessTime":1363711375617,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711375617,"owner":"guest","pathSuffix":"README2","permission":"644","replication":1,"type":"FILE"}]}}
 
 It is a design decision of the DSL to not provide type safe classes for various request and response payloads.
 Doing so would provide an undesirable coupling between the DSL and the service implementation.
@@ -149,8 +149,8 @@ This script file is available in the dis
     import groovy.json.JsonSlurper
     
     gateway = "https://localhost:8443/gateway/sandbox"
-    username = "bob"
-    password = "bob-password"
+    username = "guest"
+    password = "guest-password"
     dataFile = "README"
     
     hadoop = Hadoop.login( gateway, username, password )
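
With the credentials in a script like the one above switched to the demo guest account, the script can be exercised end to end through the shell jar under {GATEWAY_HOME}/bin (a sketch; the script name is a placeholder for whichever sample was edited, and it assumes the sandbox topology is deployed and the demo LDAP is running):

    cd {GATEWAY_HOME}
    java -jar bin/shell.jar samples/your-edited-script.groovy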

Modified: incubator/knox/trunk/books/0.3.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_getting-started.md?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_getting-started.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_getting-started.md Mon Oct  7 12:58:59 2013
@@ -308,7 +308,7 @@ For example the `sandbox.xml` file will 
 Invoke the LISTSTATUS operation on WebHDFS via the gateway.
 This will return a directory listing of the root (i.e. /) directory of HDFS.
 
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
 
 The results of the above command should result in something along the lines of the output below.
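
The guest:guest-password pair comes from the demo ApacheDS LDAP that ships with the gateway. If the LISTSTATUS test above answers 401, one quick check is a direct bind against the demo LDAP on its default port. The DN below follows the userDnTemplate shown earlier plus the usual demo suffix, so treat it as an assumption and adjust it to match your users.ldif:

    ldapsearch -x -h localhost -p 33389 \
        -D 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org' -w guest-password \
        -b 'ou=people,dc=hadoop,dc=apache,dc=org' 'uid=guest'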

Modified: incubator/knox/trunk/books/0.3.0/config_authz.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config_authz.md?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config_authz.md (original)
+++ incubator/knox/trunk/books/0.3.0/config_authz.md Mon Oct  7 12:58:59 2013
@@ -31,7 +31,7 @@ Note: In the examples below \{serviceNam
 
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;*;*</value>
+        <value>guest;*;*</value>
     </param>
 
 ###### USECASE-2: Restrict access to specific Hadoop services to specific Groups
@@ -56,7 +56,7 @@ Note: In the examples below \{serviceNam
     </param>
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;admin;*</value>
+        <value>guest;admin;*</value>
     </param>
 
 ###### USECASE-5: Restrict access to specific Hadoop services to specific Users OR users from specific Remote IPs
@@ -67,7 +67,7 @@ Note: In the examples below \{serviceNam
     </param>
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;*;127.0.0.1</value>
+        <value>guest;*;127.0.0.1</value>
     </param>
 
 ###### USECASE-6: Restrict access to specific Hadoop services to users within specific Groups OR from specific Remote IPs
@@ -89,21 +89,21 @@ Note: In the examples below \{serviceNam
     </param>
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;admin;127.0.0.1</value>
+        <value>guest;admin;127.0.0.1</value>
     </param>
 
 ###### USECASE-8: Restrict access to specific Hadoop services to specific Users AND users within specific Groups
 
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;admin;*</value>
+        <value>guest;admin;*</value>
     </param>
 
 ###### USECASE-9: Restrict access to specific Hadoop services to specific Users AND users from specific Remote IPs
 
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;*;127.0.0.1</value>
+        <value>guest;*;127.0.0.1</value>
     </param>
 
 ###### USECASE-10: Restrict access to specific Hadoop services to users within specific Groups AND from specific Remote IPs
@@ -117,7 +117,7 @@ Note: In the examples below \{serviceNam
 
     <param>
         <name>{serviceName}.acl</name>
-        <value>bob;admins;127.0.0.1</value>
+        <value>guest;admins;127.0.0.1</value>
     </param>
 
 #### Configuration ####
@@ -186,7 +186,7 @@ For instance:
 
     <param>
         <name>webhdfs.acl</name>
-        <value>hdfs,bob;admin;127.0.0.2,127.0.0.3</value>
+        <value>hdfs,guest;admin;127.0.0.2,127.0.0.3</value>
     </param>
 
 You may also set the ACL processing mode at the top level for the topology. This essentially sets the default for the managed cluster.
@@ -199,7 +199,7 @@ It may then be overridden at the service
 
 this configuration indicates that ONE of the following must be satisfied to be granted access:
 
-1. the user is "hdfs" or "bob" OR
+1. the user is "hdfs" or "guest" OR
 2. the user is in "admin" group OR
 3. the request is coming from 127.0.0.2 or 127.0.0.3
 
@@ -221,7 +221,7 @@ For instance:
 
     <param>
         <name>principal.mapping</name>
-        <value>bob=hdfs</value>
+        <value>guest=hdfs</value>
     </param>
 
 In addition, we allow the administrator to map groups to effective principals. This is done through another param within the identity assertion provider:
@@ -278,7 +278,7 @@ An example of a full topology that illus
                 <enabled>true</enabled>
                 <param>
                     <name>principal.mapping</name>
-                    <value>bob=hdfs;</value>
+                    <value>guest=hdfs;</value>
                 </param>
                 <param>
                     <name>group.principal.mapping</name>
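
When trying these ACL rules out against the sandbox topology, it helps to separate authentication failures from authorization denials. A rough sketch follows; the status codes are the typical ones for the shipped authentication and ACL authorization providers, and the example ACL is hypothetical:

    # Bad credentials: rejected by the authentication provider before any ACL
    # is consulted (typically HTTP 401).
    curl -s -o /dev/null -w '%{http_code}\n' -k -u guest:wrong-password \
        'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'

    # With an ACL that excludes guest (for example webhdfs.acl set to
    # 'hdfs;admin;127.0.0.2'), the same valid login is authenticated but then
    # denied by the ACL authorization provider (typically HTTP 403).
    curl -s -o /dev/null -w '%{http_code}\n' -k -u guest:guest-password \
        'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'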

Modified: incubator/knox/trunk/books/0.3.0/config_id_assertion.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/config_id_assertion.md?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/config_id_assertion.md (original)
+++ incubator/knox/trunk/books/0.3.0/config_id_assertion.md Mon Oct  7 12:58:59 2013
@@ -40,7 +40,7 @@ This particular configuration indicates 
         <enabled>true</enabled>
         <param>
             <name>principal.mapping</name>
-            <value>bob=hdfs;</value>
+            <value>guest=hdfs;</value>
         </param>
         <param>
             <name>group.principal.mapping</name>
@@ -48,7 +48,7 @@ This particular configuration indicates 
         </param>
     </provider>
 
-This configuration identifies the same identity assertion provider but does provide principal and group mapping rules. In this case, when a user is authenticated as "bob" his identity is actually asserted to the Hadoop cluster as "hdfs". In addition, since there are group principal mappings defined, he will also be considered as a member of the groups "users" and "admin". In this particular example the wildcard "*" is used to indicate that all authenticated users need to be considered members of the "users" group and that only the user "hdfs" is mapped to be a member of the "admin" group.
+This configuration identifies the same identity assertion provider but does provide principal and group mapping rules. In this case, when a user is authenticated as "guest" his identity is actually asserted to the Hadoop cluster as "hdfs". In addition, since there are group principal mappings defined, he will also be considered as a member of the groups "users" and "admin". In this particular example the wildcard "*" is used to indicate that all authenticated users need to be considered members of the "users" group and that only the user "hdfs" is mapped to be a member of the "admin" group.
 
 **NOTE: These group memberships are currently only meaningful for Service Level Authorization using the AclsAuthorization provider. The groups are not currently asserted to the Hadoop cluster at this time. See the Authorization section within this guide to see how this is used.**
 
@@ -71,14 +71,14 @@ For instance:
 
     <param>
         <name>principal.mapping</name>
-        <value>bob=hdfs</value>
+        <value>guest=hdfs</value>
     </param>
 
 For multiple mappings:
 
     <param>
         <name>principal.mapping</name>
-        <value>bob,alice=hdfs;mary=alice2</value>
+        <value>guest,alice=hdfs;mary=alice2</value>
     </param>
 
 #### Group Principal Mapping ####
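
A lightweight way to watch a guest=hdfs mapping take effect (a sketch, assuming the sandbox topology with the identity assertion configuration above and the demo LDAP) is to ask WebHDFS for the effective user's home directory through the gateway:

    curl -i -k -u guest:guest-password -X GET \
        'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=GETHOMEDIRECTORY'
    # With the mapping enabled the returned Path should be /user/hdfs;
    # with no principal.mapping it would be /user/guest.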

Modified: incubator/knox/trunk/books/0.3.0/service_oozie.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_oozie.md?rev=1529828&r1=1529827&r2=1529828&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_oozie.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_oozie.md Mon Oct  7 12:58:59 2013
@@ -55,8 +55,8 @@ Each line from the file below will need 
     gateway = "https://localhost:8443/gateway/sandbox"
     jobTracker = "sandbox:50300";
     nameNode = "sandbox:8020";
-    username = "bob"
-    password = "bob-password"
+    username = "guest"
+    password = "guest-password"
     inputFile = "LICENSE"
     jarFile = "samples/hadoop-examples.jar"
 
@@ -131,52 +131,52 @@ Take care to follow the instructions bel
 These replacement values are identified with { } markup.
 
     # 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-    curl -i -k -u bob:bob-password -X DELETE \
+    curl -i -k -u guest:guest-password -X DELETE \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
 
     # 1. Create a test input directory /tmp/test/input
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input?op=MKDIRS'
 
     # 2. Create a test output directory /tmp/test/input
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=MKDIRS'
 
     # 3. Create the inode for hadoop-examples.jar in /tmp/test
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/hadoop-examples.jar?op=CREATE'
 
     # 4. Upload hadoop-examples.jar to /tmp/test.  Use a hadoop-examples.jar from a Hadoop install.
-    curl -i -k -u bob:bob-password -T samples/hadoop-examples.jar -X PUT '{Value Location header from command above}'
+    curl -i -k -u guest:guest-password -T samples/hadoop-examples.jar -X PUT '{Value Location header from command above}'
 
     # 5. Create the inode for a sample file README in /tmp/test/input
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE'
 
     # 6. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
-    curl -i -k -u bob:bob-password -T README -X PUT '{Value of Location header from command above}'
+    curl -i -k -u guest:guest-password -T README -X PUT '{Value of Location header from command above}'
 
     # 7. Submit the word count job via WebHCat/Templeton.
     # Take note of the Job ID in the JSON response as this will be used in the next step.
-    curl -v -i -k -u bob:bob-password -X POST \
+    curl -v -i -k -u guest:guest-password -X POST \
         -d jar=/tmp/test/hadoop-examples.jar -d class=wordcount \
         -d arg=/tmp/test/input -d arg=/tmp/test/output \
         'https://localhost:8443/gateway/sample/templeton/api/v1/mapreduce/jar'
 
     # 8. Look at the status of the job
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sample/templeton/api/v1/queue/{Job ID returned in JSON body from previous step}'
 
     # 9. Look at the status of the job queue
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sample/templeton/api/v1/queue'
 
     # 10. List the contents of the output directory /tmp/test/output
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=LISTSTATUS'
 
     # 11. Optionally cleanup the test directory
-    curl -i -k -u bob:bob-password -X DELETE \
+    curl -i -k -u guest:guest-password -X DELETE \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
 
 #### Example #4: WebHDFS & Oozie via cURL
@@ -188,32 +188,32 @@ Take care to follow the instructions bel
 These replacement values are identified with { } markup.
 
     # 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-    curl -i -k -u bob:bob-password -X DELETE \
+    curl -i -k -u guest:guest-password -X DELETE \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
 
     # 1. Create the inode for workflow definition file in /tmp/test
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/workflow.xml?op=CREATE'
 
     # 2. Upload the workflow definition file.  This file can be found in {GATEWAY_HOME}/templates
-    curl -i -k -u bob:bob-password -T templates/workflow-definition.xml -X PUT \
+    curl -i -k -u guest:guest-password -T templates/workflow-definition.xml -X PUT \
         '{Value Location header from command above}'
 
     # 3. Create the inode for hadoop-examples.jar in /tmp/test/lib
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/lib/hadoop-examples.jar?op=CREATE'
 
     # 4. Upload hadoop-examples.jar to /tmp/test/lib.  Use a hadoop-examples.jar from a Hadoop install.
-    curl -i -k -u bob:bob-password -T samples/hadoop-examples.jar -X PUT \
+    curl -i -k -u guest:guest-password -T samples/hadoop-examples.jar -X PUT \
         '{Value Location header from command above}'
 
     # 5. Create the inode for a sample input file readme.txt in /tmp/test/input.
-    curl -i -k -u bob:bob-password -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE'
 
     # 6. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
     # The sample below uses this README file found in {GATEWAY_HOME}.
-    curl -i -k -u bob:bob-password -T README -X PUT \
+    curl -i -k -u guest:guest-password -T README -X PUT \
         '{Value of Location header from command above}'
 
     # 7. Create the job configuration file by replacing the {NameNode host:port} and {JobTracker host:port}
@@ -228,17 +228,17 @@ These replacement values are identified 
 
     # 8. Submit the job via Oozie
     # Take note of the Job ID in the JSON response as this will be used in the next step.
-    curl -i -k -u bob:bob-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
+    curl -i -k -u guest:guest-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
         'https://localhost:8443/gateway/sample/oozie/api/v1/jobs?action=start'
 
     # 9. Query the job status via Oozie.
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sample/oozie/api/v1/job/{Job ID returned in JSON body from previous step}'
 
     # 10. List the contents of the output directory /tmp/test/output
-    curl -i -k -u bob:bob-password -X GET \
+    curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/output?op=LISTSTATUS'
 
     # 11. Optionally cleanup the test directory
-    curl -i -k -u bob:bob-password -X DELETE \
+    curl -i -k -u guest:guest-password -X DELETE \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
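
Steps 8 and 9 of the Oozie example ask the reader to copy the job id out of the JSON response by hand. A scripted variant is sketched below; it assumes a python interpreter is on the PATH and that the submission response carries the id in an "id" field, as stock Oozie does:

    # Submit the workflow, capture the Oozie job id, then poll its status.
    JOBID=$(curl -s -k -u guest:guest-password -T workflow-configuration.xml \
        -H Content-Type:application/xml -X POST \
        'https://localhost:8443/gateway/sample/oozie/api/v1/jobs?action=start' \
        | python -c 'import json,sys; print(json.load(sys.stdin)["id"])')
    curl -i -k -u guest:guest-password -X GET \
        "https://localhost:8443/gateway/sample/oozie/api/v1/job/$JOBID"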