Posted to commits@knox.apache.org by km...@apache.org on 2016/01/15 16:24:46 UTC

svn commit: r1724836 [4/5] - in /knox: site/ site/books/knox-0-7-0/ trunk/books/0.7.0/

Modified: knox/trunk/books/0.7.0/config_advanced_ldap.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_advanced_ldap.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_advanced_ldap.md (original)
+++ knox/trunk/books/0.7.0/config_advanced_ldap.md Fri Jan 15 15:24:45 2016
@@ -7,17 +7,17 @@ You could instead enable advanced config
 #### Problem with  userDnTemplate based Authentication 
 
 UserDnTemplate based authentication uses configuration parameter ldapRealm.userDnTemplate.
-Typical value of userDNTemplate would look like uid={0},ou=people,dc=hadoop,dc=apache,dc=org.
+Typical value of userDNTemplate would look like `uid={0},ou=people,dc=hadoop,dc=apache,dc=org`.
  
 To compute bind DN of the client, we swap the place holder {0} with login id provided by the client.
 For example, if the login id provided by the client is "guest",
-the computed bind DN would be uid=guest,ou=people,dc=hadoop,dc=apache,dc=org.
+the computed bind DN would be `uid=guest,ou=people,dc=hadoop,dc=apache,dc=org`.
  
 This keeps configuration simple.
 
 However, this does not work if users belong to different branches of LDAP DIT.
-For example, if there are some users under ou=people,dc=hadoop,dc=apache,dc=org 
-and some users under ou=contractors,dc=hadoop,dc=apache,dc=org,  
+For example, if there are some users under `ou=people,dc=hadoop,dc=apache,dc=org` 
+and some users under `ou=contractors,dc=hadoop,dc=apache,dc=org`,  
 we cannot come up with a single userDnTemplate that would work for all the users.
 
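+For illustration, covering both branches would require two different templates, but the ldapRealm.userDnTemplate parameter can hold only one of them:
+
+    uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+    uid={0},ou=contractors,dc=hadoop,dc=apache,dc=org
+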
 #### Using advanced LDAP Authentication
@@ -28,30 +28,35 @@ instead of interpolating bind DN from us
 
 #### Example search filter to find the client bind DN
  
-Assuming,  
-ldapRealm.userSearchAttributeName=uid
-ldapRealm.userObjectClass=person
-client  specified login id =  "guest"
+Assuming
+
+* ldapRealm.userSearchAttributeName=uid
+* ldapRealm.userObjectClass=person
+* client specified login id = "guest"
  
 LDAP Filter for doing a search to find the bind DN would be
-(&(uid=guest)(objectclass=person))
+
+    (&(uid=guest)(objectclass=person))
 
 This could find bind DN to be 
-uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
+
+    uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
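+
+If you want to verify such a search outside of Knox, a standard LDAP client can be used, for example (an illustrative command; substitute your own directory URL and search base, and add `-D`/`-w` with the system user credentials if anonymous searches are not allowed):
+
+    ldapsearch -x -H ldap://hdp.example.com:389 \
+        -b "dc=hadoop,dc=apache,dc=org" \
+        "(&(uid=guest)(objectclass=person))" dn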
 
 Please note that the userSearchAttributeName need not be part of the bind DN.
 
 For example, you could use 
 
-ldapRealm.userSearchAttributeName=email
-ldapRealm.userObjectClass=person
-client  specified login id =  "bill.clinton@gmail.com"
+* ldapRealm.userSearchAttributeName=email
+* ldapRealm.userObjectClass=person
+* client specified login id =  "bill.clinton@gmail.com"
 
 LDAP Filter for doing a search to find the bind DN would be
-(&(email=bill.clinton@gmail.com)(objectclass=person))
 
-This could find bind DN to be 
-uid=billc,ou=contractors,dc=hadoop,dc=apache,dc=org
+    (&(email=bill.clinton@gmail.com)(objectclass=person))
+
+This could find bind DN to be
+
+    uid=billc,ou=contractors,dc=hadoop,dc=apache,dc=org
 
 #### Example provider configuration to use advanced LDAP authentication
 
@@ -59,185 +64,180 @@ The example configuration appears verbos
 and illustration of optional parameters and default values.
 The configuration that you would use could be much shorter if you rely on default values.
 
-<provider>
-
-	<role>authentication</role>
-	<name>ShiroProvider</name>
-	<enabled>true</enabled>
-
-	<param>
-		<name>main.ldapRealm</name>
-		<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
-	</param>
-
-	<param>
-		<name>main.ldapContextFactory</name>
-		<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory
-		</value>
-	</param>
-
-	<param>
-		<name>main.ldapRealm.contextFactory</name>
-		<value>$ldapContextFactory</value>
-	</param>
-
-	<!-- update the value based on your ldap directory protocol, host and port -->
-	<param>
-		<name>main.ldapRealm.contextFactory.url</name>
-		<value>ldap://hdp.example.com:389</value>
-	</param>
-
-	<!-- optional, default value: simple
-	     Update the value based on mechanisms supported by your ldap directory -->
-	<param>
-		<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
-		<value>simple</value>
-	</param>
-
-	<!-- optional, default value: {0}
-       update the value based on your ldap DIT(directory information tree).
-       ignored if value is defined for main.ldapRealm.userSearchAttributeName -->
-	<param>
-		<name>main.ldapRealm.userDnTemplate</name>
-		<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
-	</param>
-
-	<!-- optional, default value: null
-	     If you specify a value for this attribute, useDnTemplate 
-		   specified above would be ignored and user bind DN would be computed using
-		   ldap search
-	     update the value based on your ldap DIT(directory information layout)
-	     value of search attribute should identity the user uniquely -->
-	<param>
-		<name>main.ldapRealm.userSearchAttributeName</name>
-		<value>uid</value>
-	</param>
-
-	<!-- optional, default value: false  
-	     If the value is true, groups in which user is a member are looked up 
-	     from LDAP and made available  for service level authorization checks -->
-	<param>
-		<name>main.ldapRealm.authorizationEnabled</name>
-		<value>true</value>
-	</param>
-
-	<!-- bind DN used to search for groups and user bind DN.  
-	     Required if a value is defined for main.ldapRealm.userSearchAttributeName
-	     or if the value of main.ldapRealm.authorizationEnabled is true -->
-	<param>
-		<name>main.ldapRealm.contextFactory.systemUsername</name>
-		<value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
-	</param>
-
-	<!-- password for systemUserName.
-	     Required if a value is defined for main.ldapRealm.userSearchAttributeName
-       or if the value of main.ldapRealm.authorizationEnabled is true -->
-	<param>
-		<name>main.ldapRealm.contextFactory.systemPassword</name>
-		<value>${ALIAS=ldcSystemPassword}</value>
-	</param>
-
-	<!-- optional, default value: simple
-	     Update the value based on mechanisms supported by your ldap directory -->
-	<param>
-		<name>main.ldapRealm.contextFactory.systemAuthenticationMechanism</name>
-		<value>simple</value>
-	</param>
-
-	<!-- optional, default value: person
-	     Objectclass to identify user entries in ldap, used to build search 
-		   filter to search for user bind DN -->
-	<param>
-		<name>main.ldapRealm.userObjectClass</name>
-		<value>person</value>
-	</param>
-
-	<!-- search base used to search for user bind DN and groups -->
-	<param>
-		<name>main.ldapRealm.searchBase</name>
-		<value>dc=hadoop,dc=apache,dc=org</value>
-	</param>
-
-	<!-- search base used to search for user bind DN.
-	     Defaults to the value of main.ldapRealm.searchBase. 
-	     If main.ldapRealm.userSearchAttributeName is defined, 
-	     value for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
-	     should be defined -->
-	<param>
-		<name>main.ldapRealm.userSearchBase</name>
-		<value>dc=hadoop,dc=apache,dc=org</value>
-	</param>
-
-	<!-- search base used to search for groups.
-	     Defaults to the value of main.ldapRealm.searchBase.
-		   If value of main.ldapRealm.authorizationEnabled is true,
-	     value for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase should be defined -->
-	<param>
-		<name>main.ldapRealm.groupSearchBase</name>
-		<value>dc=hadoop,dc=apache,dc=org</value>
-	</param>
-
-	<!-- optional, default value: groupOfNames
-	     Objectclass to identify group entries in ldap, used to build search 
-       filter to search for group entries --> 
-	<param>
-		<name>main.ldapRealm.groupObjectClass</name>
-		<value>groupOfNames</value>
-	</param>
-  
-	<!-- optional, default value: member
-	     If value is memberUrl, we treat found groups as dynamic groups -->
-	<param>
-		<name>main.ldapRealm.memberAttribute</name>
-		<value>member</value>
-	</param>
-
-	<!-- optional, default value: uid={0}
-       Ignored if value is defined for main.ldapRealm.userSearchAttributeName -->
-  <param>
-    <name>main.ldapRealm.memberAttributeValueTemplate</name>
-    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
-  </param>
-  
-	<!-- optional, default value: cn -->
-	<param>
-		<name>main.ldapRealm.groupIdAttribute</name>
-		<value>cn</value>
-	</param>
-
-	<param>
-		<name>urls./**</name>
-		<value>authcBasic</value>
-	</param>
-
-	<!-- optional, default value: 30min -->
-	<param>
-		<name>sessionTimeout</name>
-		<value>30</value>
-	</param>
-
-</provider>
-
+    <provider>
+    
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        
+        <param>
+            <name>main.ldapContextFactory</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapContextFactory</value>
+        </param>
+        
+        <!-- update the value based on your ldap directory protocol, host and port -->
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://hdp.example.com:389</value>
+        </param>
+        
+        <!-- optional, default value: simple
+             Update the value based on mechanisms supported by your ldap directory -->
+        <param>
+            <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        
+        <!-- optional, default value: {0}
+             update the value based on your ldap DIT(directory information tree).
+             ignored if value is defined for main.ldapRealm.userSearchAttributeName -->
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- optional, default value: null
+             If you specify a value for this attribute, the userDnTemplate
+             specified above would be ignored and the user bind DN would be computed using
+             an ldap search.
+             Update the value based on your ldap DIT(directory information tree).
+             The value of the search attribute should identify the user uniquely -->
+        <param>
+            <name>main.ldapRealm.userSearchAttributeName</name>
+            <value>uid</value>
+        </param>
+        
+        <!-- optional, default value: false  
+             If the value is true, groups in which user is a member are looked up 
+             from LDAP and made available  for service level authorization checks -->
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <value>true</value>
+        </param>
+        
+        <!-- bind DN used to search for groups and user bind DN.  
+             Required if a value is defined for main.ldapRealm.userSearchAttributeName
+             or if the value of main.ldapRealm.authorizationEnabled is true -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- password for systemUserName.
+             Required if a value is defined for main.ldapRealm.userSearchAttributeName
+             or if the value of main.ldapRealm.authorizationEnabled is true -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>${ALIAS=ldcSystemPassword}</value>
+        </param>
+        
+        <!-- optional, default value: simple
+             Update the value based on mechanisms supported by your ldap directory -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemAuthenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        
+        <!-- optional, default value: person
+             Objectclass to identify user entries in ldap, used to build search 
+             filter to search for user bind DN -->
+        <param>
+            <name>main.ldapRealm.userObjectClass</name>
+            <value>person</value>
+        </param>
+        
+        <!-- search base used to search for user bind DN and groups -->
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- search base used to search for user bind DN.
+             Defaults to the value of main.ldapRealm.searchBase. 
+             If main.ldapRealm.userSearchAttributeName is defined, 
+             value for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
+             should be defined -->
+        <param>
+            <name>main.ldapRealm.userSearchBase</name>
+            <value>dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- search base used to search for groups.
+             Defaults to the value of main.ldapRealm.searchBase.
+             If value of main.ldapRealm.authorizationEnabled is true,
+             value for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase should be defined -->
+        <param>
+            <name>main.ldapRealm.groupSearchBase</name>
+            <value>dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- optional, default value: groupOfNames
+             Objectclass to identify group entries in ldap, used to build search 
+             filter to search for group entries --> 
+        <param>
+            <name>main.ldapRealm.groupObjectClass</name>
+            <value>groupOfNames</value>
+        </param>
+        
+        <!-- optional, default value: member
+             If value is memberUrl, we treat found groups as dynamic groups -->
+        <param>
+            <name>main.ldapRealm.memberAttribute</name>
+            <value>member</value>
+        </param>
+        
+        <!-- optional, default value: uid={0}
+             Ignored if value is defined for main.ldapRealm.userSearchAttributeName -->
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        
+        <!-- optional, default value: cn -->
+        <param>
+            <name>main.ldapRealm.groupIdAttribute</name>
+            <value>cn</value>
+        </param>
+        
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+        
+        <!-- optional, default value: 30min -->
+        <param>
+            <name>sessionTimeout</name>
+            <value>30</value>
+        </param>
+        
+    </provider>
+        
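+As noted above, the configuration can be much shorter when you rely on the default values. A minimal sketch that only enables search-based authentication (no group lookup) might look like the following; substitute the URL, search base and system credentials of your own directory:
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapContextFactory</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://hdp.example.com:389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userSearchAttributeName</name>
+            <value>uid</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>${ALIAS=ldcSystemPassword}</value>
+        </param>
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+    </provider>
+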
 #### Special note on parameter main.ldapRealm.contextFactory.systemPassword
 
 The value for this could have one of the following 2 formats
 
-plaintextpassword
-${ALIAS=ldcSystemPassword}
+* plaintextpassword
+* ${ALIAS=ldcSystemPassword}
 
 The first format specifies the password in plain text in the provider configuration.
 Use of this format should be limited to testing and troubleshooting.
 
-We strongly recommend using the second format ${ALIAS=ldcSystemPassword}
-n production. This format uses an alias for the password stored in credential store.
-In the example ${ALIAS=ldcSystemPassword}, 
+We strongly recommend using the second format `${ALIAS=ldcSystemPassword}` in production.
+This format uses an alias for the password stored in credential store.
+In the example `${ALIAS=ldcSystemPassword}`, 
 ldcSystemPassword is the alias for the password stored in credential store.
 
-Assuming plain text password is "hadoop", and your topology file name is "hdp.xml",
+Assuming the plain text password is "hadoop", and your topology file name is "hdp.xml",
 you would use the following command to create the right password alias in the credential store.
 
-$gateway_home/bin/knoxcli.sh  create-alias ldcSystemPassword --cluster hdp --value hadoop
-
-
-
-
+    {GATEWAY_HOME}/bin/knoxcli.sh  create-alias ldcSystemPassword --cluster hdp --value hadoop

Modified: knox/trunk/books/0.7.0/config_audit.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_audit.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_audit.md (original)
+++ knox/trunk/books/0.7.0/config_audit.md Fri Jan 15 15:24:45 2016
@@ -18,26 +18,26 @@
 ### Audit ###
 
 The Audit facility within the Knox Gateway introduces functionality for tracking actions that are executed by Knox per user's request or that are produced by Knox internal events like topology deploy, etc.
-The Knox Audit module is based on the [Apache log4j](http://logging.apache.org/log4j/1.2/).
+The Knox Audit module is based on [Apache log4j](http://logging.apache.org/log4j/1.2/).
 
 #### Configuration needed ####
 
-Out of the box, the Knox Gateway includes preconfigured auditing capabilities. To change its configuration please read following sections.
+Out of the box, the Knox Gateway includes preconfigured auditing capabilities. To change its configuration please read the following sections.
 
 #### Where audit logs go ####
 
-Audit module is preconfigured to write audit records to the log file `/var/log/knox/gateway-audit.log`.
+The Audit module is preconfigured to write audit records to the log file `{GATEWAY_HOME}/log/gateway-audit.log`.
 
-This behavior can be changed in the `conf/gateway-log4j.properties` file. `log4j.appender.auditfile.*` properties determine this behavior. For detailed information read [Apache log4j](http://logging.apache.org/log4j/1.2/).
+This behavior can be changed in the `{GATEWAY_HOME}/conf/gateway-log4j.properties` file. `app.audit.file` can be used to change the location. The `log4j.appender.auditfile.*` properties can be used for further customization. For detailed information read the [Apache log4j](http://logging.apache.org/log4j/1.2/) documentation.
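+
+For example, to send audit records to a differently named file you might set (an illustrative snippet; verify the property names against your own `gateway-log4j.properties`):
+
+    app.audit.file=my-gateway-audit.log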
 
 #### Audit format ####
 
-Out of the box, the audit record format is defined by org.apache.hadoop.gateway.audit.log4j.layout.AuditLayout.
-Its structure is following:
+Out of the box, the audit record format is defined by `org.apache.hadoop.gateway.audit.log4j.layout.AuditLayout`.
+Its structure is as follows:
 
-	EVENT_PUBLISHING_TIME ROOT_REQUEST_ID|PARENT_REQUEST_ID|REQUEST_ID|LOGGER_NAME|TARGET_SERVICE_NAME|USER_NAME|PROXY_USER_NAME|SYSTEM_USER_NAME|ACTION|RESOURCE_TYPE|RESOURCE_NAME|OUTCOME|LOGGING_MESSAGE
+    EVENT_PUBLISHING_TIME ROOT_REQUEST_ID|PARENT_REQUEST_ID|REQUEST_ID|LOGGER_NAME|TARGET_SERVICE_NAME|USER_NAME|PROXY_USER_NAME|SYSTEM_USER_NAME|ACTION|RESOURCE_TYPE|RESOURCE_NAME|OUTCOME|LOGGING_MESSAGE
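+
+For illustration, a hypothetical record following this layout could look like the following (all field values are made up):
+
+    16/01/15 15:24:45 8a6f2d8e-...||8a6f2d8e-...|audit|WEBHDFS|guest|||access|uri|/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS|success|Response status: 200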
 
-The audit record format can be changed by setting `log4j.appender.auditfile.layout` property in `conf/gateway-log4j.properties` to another class that extends org.apache.log4j.Layout or its subclasses.
+The audit record format can be changed by setting `log4j.appender.auditfile.layout` property in `{GATEWAY_HOME}/conf/gateway-log4j.properties` to another class that extends `org.apache.log4j.Layout` or its subclasses.
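+
+For example, to switch to a standard pattern-based layout you might set (an illustrative snippet; the pattern shown is not the default audit format):
+
+    log4j.appender.auditfile.layout=org.apache.log4j.PatternLayout
+    log4j.appender.auditfile.layout.ConversionPattern=%d{ISO8601} %m%n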
 
 For detailed information read [Apache log4j](http://logging.apache.org/log4j/1.2/).
 
@@ -65,14 +65,8 @@ LOGGING_MESSAGE| Logging message. Contai
 Audit logging is preconfigured with `org.apache.log4j.DailyRollingFileAppender`.
 [Apache log4j](http://logging.apache.org/log4j/1.2/) contains information about other Appenders.
 
-#### How to change audit level or disable it ####
-
-Audit configuration is stored in the `conf/gateway-log4j.properties` file.
+#### How to change the audit level or disable it ####
 
 All audit messages are logged at `INFO` level and this behavior can't be changed.
 
-To change audit configuration `log4j.logger.audit*` and `log4j.appender.auditfile*` properties in `conf/gateway-log4j.properties` file should be modified.
-
-Their meaning can be found in [Apache log4j](http://logging.apache.org/log4j/1.2/).
-
-Disabling auditing can be done by decreasing log level for appender.
+Disabling auditing can be done by decreasing the log level for the Audit appender or setting it to `OFF`.
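+
+For example, one way to switch auditing off entirely is to raise the threshold of the audit appender in `{GATEWAY_HOME}/conf/gateway-log4j.properties` (an illustrative snippet; your configuration may instead adjust the audit logger itself):
+
+    log4j.appender.auditfile.Threshold=OFF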

Modified: knox/trunk/books/0.7.0/config_authn.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_authn.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_authn.md (original)
+++ knox/trunk/books/0.7.0/config_authn.md Fri Jan 15 15:24:45 2016
@@ -53,13 +53,13 @@ Conversely, the Shiro provider currently
 
 The following example illustrates a configuration of the bundled BASIC/LDAP authentication config in a shiro.ini file:
 
-	[urls]
-	/**=authcBasic
-	[main]
-	ldapRealm=org.apache.shiro.realm.ldap.JndiLdapRealm
-	ldapRealm.contextFactory.authenticationMechanism=simple
-	ldapRealm.contextFactory.url=ldap://localhost:33389
-	ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+    [urls]
+    /**=authcBasic
+    [main]
+    ldapRealm=org.apache.shiro.realm.ldap.JndiLdapRealm
+    ldapRealm.contextFactory.authenticationMechanism=simple
+    ldapRealm.contextFactory.url=ldap://localhost:33389
+    ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org
 
 In order to fit into the context of an INI file format, at deployment time we interrogate the parameters provided in the provider configuration and parse the INI section out of the parameter names. The following provider config illustrates this approach. Notice that the section names in the above shiro.ini match the beginning of the param names that are in the following config:
 
@@ -96,9 +96,9 @@ This happens to be the way that we are c
 
 This section discusses the LDAP configuration used above for the Shiro Provider. Some of these configuration elements will need to be customized to reflect your deployment environment.
 
-**main.ldapRealm** - this element indicates the fully qualified classname of the Shiro realm to be used in authenticating the user. The classname provided by default in the sample is the `org.apache.shiro.realm.ldap.JndiLdapRealm` this implementation provides us with the ability to authenticate but by default has authorization disabled. In order to provide authorization - which is seen by Shiro as dependent on an LDAP schema that is specific to each organization - an extension of JndiLdapRealm is generally used to override and implement the doGetAuhtorizationInfo method. In this particular release we are providing a simple authorization provider that can be used along with the Shiro authentication provider.
+**main.ldapRealm** - this element indicates the fully qualified class name of the Shiro realm to be used in authenticating the user. The class name provided by default in the sample is `org.apache.shiro.realm.ldap.JndiLdapRealm`. This implementation provides us with the ability to authenticate but by default has authorization disabled. In order to provide authorization - which is seen by Shiro as dependent on an LDAP schema that is specific to each organization - an extension of JndiLdapRealm is generally used to override and implement the doGetAuthorizationInfo method. In this particular release we are providing a simple authorization provider that can be used along with the Shiro authentication provider.
 
-**main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the common name (CN) portion of the DN. **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox and is represented by the users.ldif file in the {GATEWAY_HOME}/conf directory.
+**main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the common name (CN) portion of the DN. **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox and is represented by the users.ldif file in the `{GATEWAY_HOME}/conf` directory.
 
 **main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of LDAP server. It also includes the scheme of the protocol to use. This may be either ldap or ldaps depending on whether you are communicating with the LDAP over SSL (highly recommended). **This element will need to be customized to reflect your deployment environment.**.
 
@@ -113,10 +113,12 @@ You would use LDAP configuration as docu
 Some Active Directory specific things to keep in mind:
 
 Typical AD main.ldapRealm.userDnTemplate value looks slightly different, such as
+
     cn={0},cn=users,DC=lab,DC=sample,dc=com
 
-Please compare this with a typical Apache DS main.ldapRealm.userDnTemplate value and make note of the difference.
-    uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+Please compare this with a typical Apache DS main.ldapRealm.userDnTemplate value and make note of the difference:
+
+    uid={0},ou=people,dc=hadoop,dc=apache,dc=org
 
 If your AD is configured to authenticate based on just the cn and password and does not require user DN, you do not have to specify a value for main.ldapRealm.userDnTemplate.
 
@@ -132,7 +134,7 @@ In order to communicate with your LDAP s
 
 Knox maps each cluster topology to a web application and leverages standard JavaEE session management.
 
-To configure session idle timeout for the topology, please specify value of parameter sessionTimeout for ShiroProvider in your topology file.  If you do not specify the value for this parameter, it defaults to 30minutes.
+To configure the session idle timeout for a topology, specify the value of the sessionTimeout parameter for the ShiroProvider in your topology file. If you do not specify a value for this parameter, it defaults to 30 minutes.
 
 The definition would look like the following in the topology file:
 
@@ -154,8 +156,4 @@ The definition would look like the follo
     <provider>
     ...
 
-
-At present, ShiroProvider in Knox leverages JavaEE session to maintain authentication state for a user across requests using JSESSIONID cookie.  So, a client that authenticated with Knox could pass the JSESSIONID cookie with repeated requests as long as the session has not timed out instead of submitting userid/password with every request.  Presenting a valid session cookie in place of userid/password would also perform better as additional credential store lookups are avoided.
-
-
-
+At present, the ShiroProvider in Knox leverages the JavaEE session to maintain authentication state for a user across requests using the JSESSIONID cookie. So, a client that authenticated with Knox could pass the JSESSIONID cookie with repeated requests, as long as the session has not timed out, instead of submitting userid/password with every request. Presenting a valid session cookie in place of userid/password would also perform better as additional credential store lookups are avoided.
\ No newline at end of file

Modified: knox/trunk/books/0.7.0/config_authz.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_authz.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_authz.md (original)
+++ knox/trunk/books/0.7.0/config_authz.md Fri Jan 15 15:24:45 2016
@@ -23,7 +23,93 @@ The Knox Gateway has an out-of-the-box a
 
 This provider utilizes a simple and familiar pattern of using ACLs to protect Hadoop resources by specifying users, groups and ip addresses that are permitted access.
 
-Note: In the examples below \{serviceName\} represents a real service name (e.g. WEBHDFS) and would be replaced with these values in an actual configuration.
+#### Configuration ####
+
+ACLs are bound to services within the topology descriptors by introducing the authorization provider with configuration like:
+
+    <provider>
+        <role>authorization</role>
+        <name>AclsAuthz</name>
+        <enabled>true</enabled>
+    </provider>
+
+The above configuration enables the authorization provider but does not indicate any ACLs yet and therefore there is no restriction to accessing the Hadoop services. In order to indicate the resources to be protected and the specific users, groups or ip's to grant access, we need to provide parameters like the following:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
+    </param>
+    
+where `{serviceName}` would need to be the name of a configured Hadoop service within the topology.
+
+NOTE: ipaddr is unique among the parts of the ACL in that you are able to specify a wildcard within an ipaddr to indicate that the remote address must begin with the string prior to the asterisk within the ipaddr ACL. For instance:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;192.168.*</value>
+    </param>
+    
+This indicates that the request must come from an IP address that begins with '192.168.' in order to be granted access.
+
+Note also that configuration without any ACLs defined is equivalent to:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;*</value>
+    </param>
+
+meaning: all users, groups and IPs have access.
+Each of the elements of the acl param supports multiple values via a comma-separated list and the `*` wildcard to match any.
+
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+this configuration indicates that ALL of the following have to be satisfied to be granted access:
+
+1. The user name must be "hdfs" AND
+2. The user must be in the group "admin" AND
+3. The user must come from either 127.0.0.2 or 127.0.0.3
+
+This allows us to craft a policy that restricts the members of a large group to a subset that should have access.
+Removing the user from the group will then cause access to be denied even though their username may still appear in the ACL.
+
+An additional configuration element may be used to alter the processing of the ACL to be OR instead of the default AND behavior:
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+
+this processing behavior requires that the effective user satisfy one of the parts of the ACL definition in order to be granted access.
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs,guest;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+You may also set the ACL processing mode at the top level for the topology. This essentially sets the default for the managed cluster.
+It may then be overridden at the service level as well.
+
+    <param>
+        <name>acl.mode</name>
+        <value>OR</value>
+    </param>
+
+this configuration indicates that ONE of the following must be satisfied to be granted access:
+
+1. The user is "hdfs" or "guest" OR
+2. The user is in the "admin" group OR
+3. The request is coming from 127.0.0.2 or 127.0.0.3
+
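+For example, OR processing could be made the default for the whole topology while a single service is kept on the stricter AND behavior (an illustrative combination of the parameters described above):
+
+    <param>
+        <name>acl.mode</name>
+        <value>OR</value>
+    </param>
+
+    <param>
+        <name>webhdfs.acl.mode</name>
+        <value>AND</value>
+    </param>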
+
+Following are a few concrete examples of how to use this feature.
+
+Note: In the examples below `{serviceName}` represents a real service name (e.g. WEBHDFS) and would be replaced with these values in an actual configuration.
 
 ##### Usecases #####
 
@@ -120,127 +206,11 @@ Note: In the examples below \{serviceNam
         <value>guest;admins;127.0.0.1</value>
     </param>
 
-#### Configuration ####
-
-ACLs are bound to services within the topology descriptors by introducing the authorization provider with configuration like:
-
-    <provider>
-        <role>authorization</role>
-        <name>AclsAuthz</name>
-        <enabled>true</enabled>
-    </provider>
-
-The above configuration enables the authorization provider but does not indicate any ACLs yet and therefore there is no restriction to accessing the Hadoop services. In order to indicate the resources to be protected and the specific users, groups or ip's to grant access, we need to provide parameters like the following:
-
-    <param>
-        <name>{serviceName}.acl</name>
-        <value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
-    </param>
-    
-where `{serviceName}` would need to be the name of a configured Hadoop service within the topology.
-
-NOTE: ipaddr is unique among the parts of the ACL in that you are able to specify a wildcard within an ipaddr to indicate that the remote address must being with the String prior to the asterisk within the ipaddr acl. For instance:
-
-    <param>
-        <name>{serviceName}.acl</name>
-        <value>*;*;192.168.*</value>
-    </param>
-    
-This indicates that the request must come from an IP address that begins with '192.168.' in order to be granted access.
-
-Note also that configuration without any ACLs defined is equivalent to:
-
-    <param>
-        <name>{serviceName}.acl</name>
-        <value>*;*;*</value>
-    </param>
-
-meaning: all users, groups and IPs have access.
-Each of the elements of the acl param support multiple values via comma separated list and the `*` wildcard to match any.
-
-For instance:
-
-    <param>
-        <name>webhdfs.acl</name>
-        <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
-    </param>
-
-this configuration indicates that ALL of the following are satisfied:
-
-1. the user "hdfs" has access AND
-2. users in the group "admin" have access AND
-3. any authenticated user from either 127.0.0.2 or 127.0.0.3 will have access
-
-This allows us to craft policy that restricts the members of a large group to a subset that should have access.
-The user being removed from the group will allow access to be denied even though their username may have been in the ACL.
-
-An additional configuration element may be used to alter the processing of the ACL to be OR instead of the default AND behavior:
-
-    <param>
-        <name>{serviceName}.acl.mode</name>
-        <value>OR</value>
-    </param>
-
-this processing behavior requires that the effective user satisfy one of the parts of the ACL definition in order to be granted access.
-For instance:
-
-    <param>
-        <name>webhdfs.acl</name>
-        <value>hdfs,guest;admin;127.0.0.2,127.0.0.3</value>
-    </param>
-
-You may also set the ACL processing mode at the top level for the topology. This essentially sets the default for the managed cluster.
-It may then be overridden at the service level as well.
-
-    <param>
-        <name>acl.mode</name>
-        <value>OR</value>
-    </param>
-
-this configuration indicates that ONE of the following must be satisfied to be granted access:
-
-1. the user is "hdfs" or "guest" OR
-2. the user is in "admin" group OR
-3. the request is coming from 127.0.0.2 or 127.0.0.3
-
-#### Other Related Configuration ####
+###### USECASE-12: Full example including identity assertion/principal mapping ######
 
 The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.
 
-This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.
-When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal.
-If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.
-Principal mapping has actually been available in the identity assertion provider from the beginning of Knox and is documented fully in the Identity Assertion section of this guide.
-
-    <param>
-        <name>principal.mapping</name>
-        <value>{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]</value>
-    </param>
-
-For instance:
-
-    <param>
-        <name>principal.mapping</name>
-        <value>guest=hdfs</value>
-    </param>
-
-In addition, we allow the administrator to map groups to effective principals. This is done through another param within the identity assertion provider:
-
-    <param>
-        <name>group.principal.mapping</name>
-        <value>{userName[,*|userName...]}={groupName[,groupName...]}[,...]</value>
-    </param>
-
-For instance:
-
-    <param>
-        <name>group.principal.mapping</name>
-        <value>*=users;hdfs=admin</value>
-    </param>
-
-this configuration indicates that all (*) authenticated users are members of the "users" group and that user "hdfs" is a member of the admin group. Group principal mapping has been added along with the authorization provider described in this document.
-
-For more information on principal and group principal mapping see the Identity Assertion section of this guide.
+This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. It is fully documented in the Identity Assertion section of this guide.
 
 These additional mapping capabilities are used together with the authorization ACL policy.
 An example of a full topology that illustrates these together is below.
@@ -317,33 +287,33 @@ An example of a full topology that illus
             </provider>
         </gateway>
 
-		<service>
-        	<role>JOBTRACKER</role>
-        	<url>rpc://localhost:8050</url>
-    	</service>
-
-    	<service>
-        	<role>WEBHDFS</role>
-        	<url>http://localhost:50070/webhdfs</url>
-    	</service>
-
-    	<service>
-        	<role>WEBHCAT</role>
-        	<url>http://localhost:50111/templeton</url>
-    	</service>
-
-    	<service>
-        	<role>OOZIE</role>
-        	<url>http://localhost:11000/oozie</url>
-    	</service>
-
-    	<service>
-        	<role>WEBHBASE</role>
-        	<url>http://localhost:60080</url>
-    	</service>
-
-    	<service>
-        	<role>HIVE</role>
-        	<url>http://localhost:10001/cliservice</url>
-    	</service>
+        <service>
+            <role>JOBTRACKER</role>
+            <url>rpc://localhost:8050</url>
+        </service>
+
+        <service>
+            <role>WEBHDFS</role>
+            <url>http://localhost:50070/webhdfs</url>
+        </service>
+
+        <service>
+            <role>WEBHCAT</role>
+            <url>http://localhost:50111/templeton</url>
+        </service>
+
+        <service>
+            <role>OOZIE</role>
+            <url>http://localhost:11000/oozie</url>
+        </service>
+
+        <service>
+            <role>WEBHBASE</role>
+            <url>http://localhost:8080</url>
+        </service>
+
+        <service>
+            <role>HIVE</role>
+            <url>http://localhost:10001/cliservice</url>
+        </service>
     </topology>

Modified: knox/trunk/books/0.7.0/config_ha.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_ha.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_ha.md (original)
+++ knox/trunk/books/0.7.0/config_ha.md Fri Jan 15 15:24:45 2016
@@ -17,17 +17,19 @@
 
 ### High Availability ###
 
+This describes how Knox itself can be made highly available.
+
 #### Configure Knox instances ####
 
-All Knox instances must be synced to use the same topologies credentials keystores.
-These files are located under {GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks.
+All Knox instances must be synced to use the same topology credential keystores.
+These files are located under `{GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks`.
 They are generated after the first topology deployment.
-Currently these files can be synced just manually. There is no automation tool.
+Currently these files need to be synced manually.
 Here are the steps to sync the topology credential keystores:
 
-1. Choose Knox instance that will be the source for topologies credentials keystores. Let's call it keystores master
-1. Replace topologies credentials keystores in the other Knox instance with topologies credentials keystores from keystores master
-1. Restart Knox instances
+1. Choose a Knox instance that will be the source for topology credential keystores. Let's call it the _keystores master_.
+2. Replace the topology credential keystores in the other Knox instances with the topology credential keystores from the _keystores master_ (see the example command below).
+3. Restart the Knox instances.
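+
+For example, the copy in step 2 could be done with a command along these lines, run once for each additional Knox instance (illustrative; adjust the host name and paths for your environment):
+
+    scp {GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks \
+        knox@other-knox-host:{GATEWAY_HOME}/conf/security/keystores/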
 
 #### High Availability with Apache HTTP Server + mod_proxy + mod_proxy_balancer ####
 
@@ -43,7 +45,7 @@ openssl-devel is required for Apache Mod
 
 Apache HTTP Server 2.4.6 or later is required. See this document for installing and setting up Apache HTTP Server: http://httpd.apache.org/docs/2.4/install.html
 
-Hint: pass --enable-ssl to ./configure command to enable Apache Module mod_ssl generation.
+Hint: pass `--enable-ssl` to the `./configure` command to enable the generation of the Apache Module _mod_ssl_.
 
 ###### Apache Module mod_proxy ######
 

Modified: knox/trunk/books/0.7.0/config_id_assertion.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_id_assertion.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_id_assertion.md (original)
+++ knox/trunk/books/0.7.0/config_id_assertion.md Fri Jan 15 15:24:45 2016
@@ -33,7 +33,7 @@ The following configuration is required
         <enabled>true</enabled>
     </provider>
 
-This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query parameter or as a form parameter - ie. ?user.name={primaryPrincipal}
+This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query parameter or as a form parameter - ie. `?user.name={primaryPrincipal}`
 
     <provider>
         <role>identity-assertion</role>
@@ -57,7 +57,7 @@ The principal mapping aspect of the iden
 
 This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.
 
-When a principal mapping is defined that results in an impersonated principal being created the impersonated principal is then the effective principal.
+When a principal mapping is defined that results in an impersonated principal, this impersonated principal is then the effective principal.
 
 If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.
 
@@ -108,8 +108,8 @@ The following configuration would conver
         <name>Concat</name>
         <enabled>true</enabled>
         <param>
-          <name>concat.suffix</name>
-          <value>_domain1</value>
+            <name>concat.suffix</name>
+            <value>_domain1</value>
         </param>
     </provider>
 
@@ -127,7 +127,7 @@ Param | Description
 ------|-----------
 input | This is a regular expression that will be applied to the incoming identity. The most critical part of the regular expression is the group notation within the expression. In regular expressions, groups are expressed within parenthesis. For example in the regular expression "(.*)@(.*?)\..*" there are two groups. When this regular expression is applied to "nobody@us.imaginary.tld" group 1 matches "nobody" and group 2 matches "us". 
 output| This is a template that assembles the result identity. The result is assembled from the static text and the matched groups from the input regular expression. In addition, the matched group values can be looked up in the lookup table. An output value of "{1}_{2}" will result in "nobody_us".
-lookup| This lookup table provides a simple (albeit limited) way to translate text in the incoming identities. This configuration takes the form of "=" separated name values pairs separated by ";". For example an lookup setting is "us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding the desired group number in square brackets (i.e. []). Putting it all together, output setting of "{1}_[{2}]" combined with input of "(.*)@(.*?)\..*" and lookup of "us=USA;ca=CANADA" will turn "nobody@us.imaginary.tld" into "nobody@USA".      
+lookup| This lookup table provides a simple (albeit limited) way to translate text in the incoming identities. This configuration takes the form of "="-separated name/value pairs separated by ";". For example, a lookup setting is "us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding the desired group number in square brackets (i.e. []). Putting it all together, an output setting of "{1}_[{2}]" combined with input of "(.*)@(.*?)\..*" and lookup of "us=USA;ca=CANADA" will turn "nobody@us.imaginary.tld" into "nobody_USA".
 
 Within the topology file the provider configuration might look like this.
 

Modified: knox/trunk/books/0.7.0/config_kerberos.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_kerberos.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_kerberos.md (original)
+++ knox/trunk/books/0.7.0/config_kerberos.md Fri Jan 15 15:24:45 2016
@@ -17,9 +17,8 @@
 
 ### Secure Clusters ###
 
-See these documents for setting up a secure Hadoop cluster
-http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuration_in_Secure_Mode
-http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.1/bk_installing_manually_book/content/rpm-chap14.html
+See the Hadoop documentation for setting up a secure Hadoop cluster
+http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
 
 Once you have a Hadoop cluster that is using Kerberos for authentication, you have to do the following to configure Knox to work with that cluster.
 
@@ -36,8 +35,7 @@ ssh into your host running KDC
 
     kadmin.local
     add_principal -randkey knox/knox@EXAMPLE.COM
-    ktadd -norandkey -k /etc/security/keytabs/knox.service.keytab
-    ktadd -k /etc/security/keytabs/knox.service.keytab -norandkey knox/knox@EXAMPLE.COM
+    ktadd -k knox.service.keytab -norandkey knox/knox@EXAMPLE.COM
     exit
 
 
@@ -47,18 +45,18 @@ Add unix account for the knox user on Kn
 
     useradd -g hadoop knox
 
-Copy knox.service.keytab created on KDC host on to your Knox host /etc/knox/conf/knox.service.keytab
+Copy the knox.service.keytab created on the KDC host to `{GATEWAY_HOME}/conf/knox.service.keytab` on your Knox host.
 
     chown knox knox.service.keytab
     chmod 400 knox.service.keytab
 
-#### Update krb5.conf at /etc/knox/conf/krb5.conf on Knox host ####
+#### Update `krb5.conf` at `{GATEWAY_HOME}/conf/krb5.conf` on Knox host ####
 
-You could copy the `templates/krb5.conf` file provided in the Knox binary download and customize it to suit your cluster.
+You could copy the `{GATEWAY_HOME}/templates/krb5.conf` file provided in the Knox binary download and customize it to suit your cluster.
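+
+For example (illustrative; adjust the paths to your installation):
+
+    cp {GATEWAY_HOME}/templates/krb5.conf {GATEWAY_HOME}/conf/krb5.conf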
 
 #### Update `krb5JAASLogin.conf` at `/etc/knox/conf/krb5JAASLogin.conf` on Knox host ####
 
-You could copy the `templates/krb5JAASLogin.conf` file provided in the Knox binary download and customize it to suit your cluster.
+You could copy the `{GATEWAY_HOME}/templates/krb5JAASLogin.conf` file provided in the Knox binary download and customize it to suit your cluster.
 
 #### Update `gateway-site.xml` on Knox host ####
 

Modified: knox/trunk/books/0.7.0/config_knox_sso.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_knox_sso.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_knox_sso.md (original)
+++ knox/trunk/books/0.7.0/config_knox_sso.md Fri Jan 15 15:24:45 2016
@@ -3,9 +3,9 @@
 ### Introduction
 ---
 
-Authentication of the Hadoop component UIs, and those of the overall ecosystem, is usually limited to Kerberos (which requires SPNEGO to be configured for the user's browser) and simple/psuedo. This often results in the UIs not being secured - even in secured clusters. This is where KnoxSSO provides value for through providing WebSSO capabilities to the Hadoop cluster.
+Authentication of the Hadoop component UIs, and those of the overall ecosystem, is usually limited to Kerberos (which requires SPNEGO to be configured for the user's browser) and simple/pseudo. This often results in the UIs not being secured - even in secured clusters. This is where KnoxSSO provides value by providing WebSSO capabilities to the Hadoop cluster.
 
-By leveraging the hadoop-auth module in Hadoop common, we have introduced the ability to consume a common SSO cookie for web UIs while retaining the non-web browser authentication through kerberos/SPNEGO. We do this by extneding the AltKerberosAuthenticationHandler class which provides the useragent based multiplexing. 
+By leveraging the hadoop-auth module in Hadoop common, we have introduced the ability to consume a common SSO cookie for web UIs while retaining the non-web browser authentication through Kerberos/SPNEGO. We do this by extending the AltKerberosAuthenticationHandler class which provides the user agent based multiplexing.
 
 We also provide integration guidance within the developers guide for other applications to be able to participate in these SSO capabilities.
 
@@ -18,76 +18,74 @@ This document describes the overall setu
 ### KnoxSSO Setup
 
 #### knoxsso.xml Topology
-To enable KnoxSSO, we need to configure the KnoxSSO topology. The following is an example of this topology which is configured to use HTTP Basic Auth against the Knox Demo LDAP server. This is the lowest barrier of entry for your development environment that actually authenticates against a real user store. What’s great is if you work against the IdP with Basic Auth then you will work with SAML or anything else as well. SAML support is provided through our PicketLink federation provider and we will provide an example configuration for that as well.
+To enable KnoxSSO, we need to configure the KnoxSSO topology. The following is an example of this topology which is configured to use HTTP Basic Auth against the Knox Demo LDAP server. This is the lowest barrier of entry for your development environment that actually authenticates against a real user store. What's great is if you work against the IdP with Basic Auth then you will work with SAML or anything else as well. SAML support is provided through our PicketLink federation provider and we will provide an example configuration for that as well.
 
-```
-		<?xml version="1.0" encoding="utf-8"?>
-		<topology>
-    		<gateway>
-        		<provider>
-            		<role>authentication</role>
-            		<name>ShiroProvider</name>
-            		<enabled>true</enabled>
-            		<param>
-	                	<name>sessionTimeout</name>
-                		<value>30</value>
-            		</param>
-            		<param>
-                		<name>main.ldapRealm</name>
-                		<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
-            		</param>
-            		<param>
-                		<name>main.ldapContextFactory</name>
-                		<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
-            		</param>
-            		<param>
-                		<name>main.ldapRealm.contextFactory</name>
-                		<value>$ldapContextFactory</value>
-            		</param>
-            		<param>
-                		<name>main.ldapRealm.userDnTemplate</name>
-                		<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
-            		</param>
-            		<param>
-                		<name>main.ldapRealm.contextFactory.url</name>
-                		<value>ldap://localhost:33389</value>
-            		</param>
-            		<param>
-                		<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
-                		<value>simple</value>
-            		</param>
-            		<param>
-                		<name>urls./**</name>
-                		<value>authcBasic</value>
-            		</param>
-        		</provider>
-		        <provider>
-        		    <role>identity-assertion</role>
-            		<name>Default</name>
-            		<enabled>true</enabled>
-        		</provider>
-    		</gateway>
-		    <service>
-        		<role>KNOXSSO</role>
-        		<param>
-          			<name>knoxsso.cookie.secure.only</name>
-          			<value>true</value>
-        		</param>
-        		<param>
-          			<name>knoxsso.token.ttl</name>
-          			<value>100000</value>
-        		</param>
-        		<param>
-          			<name>knoxsso.redirect.whitelist.regex</name>
-          			<value>^/.*$;https?://localhost*$</value>
-        		</param>
-        		<param>
-          			<name>knoxsso.cookie.domain.suffix</name>
-          			<value>.novalocal</value>
-        		</param>
-    		</service>
-		</topology>
-```
+    <?xml version="1.0" encoding="utf-8"?>
+    <topology>
+        <gateway>
+            <provider>
+                <role>authentication</role>
+                <name>ShiroProvider</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>sessionTimeout</name>
+                    <value>30</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm</name>
+                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+                </param>
+                <param>
+                    <name>main.ldapContextFactory</name>
+                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory</name>
+                    <value>$ldapContextFactory</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.userDnTemplate</name>
+                    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory.url</name>
+                    <value>ldap://localhost:33389</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+                    <value>simple</value>
+                </param>
+                <param>
+                    <name>urls./**</name>
+                    <value>authcBasic</value>
+                </param>
+            </provider>
+            <provider>
+                <role>identity-assertion</role>
+                <name>Default</name>
+                <enabled>true</enabled>
+            </provider>
+        </gateway>
+        <service>
+            <role>KNOXSSO</role>
+            <param>
+                <name>knoxsso.cookie.secure.only</name>
+                <value>true</value>
+            </param>
+            <param>
+                <name>knoxsso.token.ttl</name>
+                <value>100000</value>
+            </param>
+            <param>
+                <name>knoxsso.redirect.whitelist.regex</name>
+                <value>^/.*$;https?://localhost*$</value>
+            </param>
+            <param>
+                <name>knoxsso.cookie.domain.suffix</name>
+                <value>.novalocal</value>
+            </param>
+        </service>
+    </topology>
 
 Just as with any Knox service, the KNOXSSO service is protected by the gateway providers defined above it. In this case, the ShiroProvider is taking care of HTTP Basic Auth against LDAP for us. Once the user authenticates, request processing continues to the KNOXSSO service, which creates the required cookie and performs the necessary redirects.
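+
+To sanity check this flow, a request along the following lines (host, port, and credentials are placeholders for your deployment; the credentials shown assume the demo LDAP users) should authenticate over Basic Auth and respond with the KnoxSSO cookie and a redirect back to the requested originalUrl:
+
+    curl -iku guest:guest-password "https://localhost:8443/gateway/knoxsso/api/v1/websso?originalUrl=https://localhost:8443/"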
 
@@ -97,62 +95,59 @@ This is a good place to start in the set
 
 This topology will result in a KnoxSSO URL that looks something like:
 
-	https://{gateway_host}:{gateway_port}/gateway/knoxsso/api/v1/websso
+    https://{gateway_host}:{gateway_port}/gateway/knoxsso/api/v1/websso
 
 This URL is needed when configuring applications that participate in KnoxSSO for a given deployment. We will refer to this as the Provider URL in this document.
 
 #### KnoxSSO Configuration Parameters
 
-Parameter | Description | Default
---------- |----------- |----------- 
-knoxsso.cookie.secure.only | This determines whether the browser is allowed to send the cookie over unsecured channels. This should always be set to true in production systems. If during development a relying party is not running ssl then you can turn this off. Running with it off exposes the cookie and underlying token for capture and replay by others. | true
-knoxsso.cookie.max.age | optional: This indicates that a cookie can only live for a specified amount of time - in seconds. This should probably be left to the default which makes it a session cookie. Session cookies are discarded once the browser session is closed. | session
-knoxsso.cookie.domain.suffix | optional: This indicates the portion of the request hostname that represents the domain to be used for the cookie domain. For single host development scenarios the default behavior should be fine. For production deployments, the expected domain should be set and all configured URLs that are related to SSO should use this domain. Otherwise, the cookie will not be presented by the browser to mismatched URLs. | Default cookie domain or a domain derived from a hostname that includes more than 2 dots.
-knoxsso.token.ttl | This indicates the lifespan of the token within the cookie. Once it expires a new cookie must be acquired from KnoxSSO. This is in milliseconds. The 36000000 in the topology above gives you 10 hrs. | 30000 That is 30 seconds.
-knoxsso.token.audiences | This is a comma separated list of audiences to add to the JWT token. This is used to ensure that a token received by a participating application knows that the token was intended for use with that application. It is optional. In the event that an application has expected audiences and they are not present the token must be rejected. In the event where the token has audiences and the application has none expected then the token is accepted. OPEN ISSUE - not currently being populated in WebSSOResource. | empty
+Parameter                        | Description | Default
+-------------------------------- |------------ |----------- 
+knoxsso.cookie.secure.only       | This determines whether the browser is allowed to send the cookie over unsecured channels. This should always be set to true in production systems. If, during development, a relying party is not running SSL then you can turn this off. Running with it off exposes the cookie and underlying token to capture and replay by others. | true
+knoxsso.cookie.max.age           | Optional: This indicates that a cookie can only live for a specified amount of time, in seconds. This should probably be left at the default, which makes it a session cookie. Session cookies are discarded once the browser session is closed. | session
+knoxsso.cookie.domain.suffix     | Optional: This indicates the portion of the request hostname that represents the domain to be used for the cookie domain. For single host development scenarios the default behavior should be fine. For production deployments, the expected domain should be set and all configured URLs that are related to SSO should use this domain. Otherwise, the cookie will not be presented by the browser to mismatched URLs. | Default cookie domain or a domain derived from a hostname that includes more than 2 dots.
+knoxsso.token.ttl                | This indicates the lifespan of the token within the cookie, in milliseconds. Once it expires, a new cookie must be acquired from KnoxSSO. The 100000 in the topology above gives you 100 seconds; a value of 36000000 would give you 10 hours. | 30000 (30 seconds)
+knoxsso.token.audiences          | This is a comma-separated list of audiences to add to the JWT token. This is used to ensure that a token received by a participating application knows that the token was intended for use with that application. It is optional. In the event that an application has expected audiences and they are not present, the token must be rejected. In the event where the token has audiences and the application has none expected, the token is accepted. OPEN ISSUE - not currently being populated in WebSSOResource. | empty
 knoxsso.redirect.whitelist.regex | A semicolon separated list of regex expressions. The incoming originalUrl must match one of the expressions in order for KnoxSSO to redirect to it after authentication. Defaults to only relative paths and localhost with or without SSL for development usecases. This needs to be opened up for production use and actual participating applications. Note that cookie use is still constrained to redirect destinations in the same domain as the KnoxSSO service - regardless of the expressions specified here. | ^/.\*$;^https?://localhost:\\d{0,9}/.\*$
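+
+As an illustration only, an audiences restriction could be added to the KNOXSSO service alongside the other params above; the audience names below are placeholders, not required values:
+
+    <param>
+        <name>knoxsso.token.audiences</name>
+        <value>tokenbased,webhdfs</value>
+    </param>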
 
 
 ### Participating Application Configuration
 #### Hadoop Configuration Example
-The following is used as the KnoxSSO configuration in the Hadoop JWTRedirectAuthenticationHandler implementation. Any participating application will need similar configuration. Since JWTRedirectAuthenticationHandler extends the AltKerberosAuthenticationHandler, the typical kerberos configuration parameters for authentication are also required.
+The following is used as the KnoxSSO configuration in the Hadoop JWTRedirectAuthenticationHandler implementation. Any participating application will need similar configuration. Since JWTRedirectAuthenticationHandler extends the AltKerberosAuthenticationHandler, the typical Kerberos configuration parameters for authentication are also required.
+
+
+    <property>
+        <name>hadoop.http.authentication.type</name>
+        <value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
+    </property>
 
-```
-	<property>
-  		<name>hadoop.http.authentication.type</name>	<value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
-	</property>
-```
 
 This is the handler classname in Hadoop auth for JWT token (KnoxSSO) support.
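+
+Because the handler extends AltKerberosAuthenticationHandler, the usual Kerberos authentication properties are expected alongside it. A minimal sketch, where the principal and keytab values are placeholders for your environment:
+
+    <property>
+        <name>hadoop.http.authentication.kerberos.principal</name>
+        <value>HTTP/_HOST@EXAMPLE.COM</value>
+    </property>
+    <property>
+        <name>hadoop.http.authentication.kerberos.keytab</name>
+        <value>/etc/security/keytabs/spnego.service.keytab</value>
+    </property>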
 
-```
-	<property>
-  		<name>hadoop.http.authentication.authentication.provider.url</name>
-  		<value>http://c6401.ambari.apache.org:8888/knoxsso</value>
-	</property>
-```
 
-The above property is the SSO provider URL that points to the knoxsso endpoint.
+    <property>
+        <name>hadoop.http.authentication.authentication.provider.url</name>
+        <value>http://c6401.ambari.apache.org:8888/knoxsso</value>
+    </property>
 
-```
-	<property>
-   		<name>hadoop.http.authentication.public.key.pem</name>
-   		<value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
-   	BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
-   	YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
-   	aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
-   	BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
-   	ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
-   	ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
-   	qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
-   	QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
-   	tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
-   	2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
-   	wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
-   	TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
-	</property>
-```
 
-The above property holds the KnoxSSO server’s public key for signature verification. Adding it directly to the config like this is convenient and is easily done through Ambari to existing config files that take custom properties. Config is generally protected as root access only as well - so it is a pretty good solution.
+The above property is the SSO provider URL that points to the knoxsso endpoint.
 
+    <property>
+        <name>hadoop.http.authentication.public.key.pem</name>
+        <value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
+      BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
+      YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
+      aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
+      BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
+      ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
+      ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
+      qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
+      QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
+      tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
+      2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
+      wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
+      TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
+    </property>
 
+The above property holds the KnoxSSO server's public key for signature verification. Adding it directly to the config like this is convenient and is easily done through Ambari for existing config files that take custom properties. Config files are also generally protected with root-only access, so this is a reasonably good solution.
\ No newline at end of file
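+
+One way to obtain this PEM content is to export the gateway identity certificate with keytool. A sketch, assuming the default Knox keystore location and alias (adjust for your install); the base64 body between the BEGIN/END markers of the exported file is what goes into the property value:
+
+    keytool -export -rfc -alias gateway-identity \
+        -keystore {GATEWAY_HOME}/data/security/keystores/gateway.jks \
+        -file gateway-identity.pem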

Modified: knox/trunk/books/0.7.0/config_ldap_authc_cache.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_ldap_authc_cache.md?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_ldap_authc_cache.md (original)
+++ knox/trunk/books/0.7.0/config_ldap_authc_cache.md Fri Jan 15 15:24:45 2016
@@ -21,98 +21,97 @@ Knox can be configured to cache LDAP aut
 caching mechanisms and has been tested with Shiro's EhCache cache manager implementation.
 
 The following provider snippet demonstrates how to configure turning on the cache using the ShiroProvider. In addition to
-using org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm in the Shiro configuration, and setting up the cache you *must* set
-the flag for enabling caching authentication to true. Please see the property, main.ldapRealm.authenticationCachingEnabled
-below.
-
-
-              <provider>
-                  <role>authentication</role>
-                  <name>ShiroProvider</name>
-                  <enabled>true</enabled>
-                  <param>
-                    <name>main.ldapRealm</name>
-                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
-                  </param>
-                  <param>
-                    <name>main.ldapGroupContextFactory</name>
-                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.contextFactory</name>
-                    <value>$ldapGroupContextFactory</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.contextFactory.url</name>
-                    <value>ldap://localhost:33389</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.userDnTemplate</name>
-                    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.authorizationEnabled</name>
-                    <!-- defaults to: false -->
-                    <value>true</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.searchBase</name>
-                    <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
-                  </param>
-                  <param>
-                    <name>main.cacheManager</name>
-                    <value>org.apache.shiro.cache.ehcache.EhCacheManager</value>
-                  </param>
-                  <param>
-                    <name>main.securityManager.cacheManager</name>
-                    <value>$cacheManager</value>
-                  </param>
-                  <param>
-                      <name>main.ldapRealm.authenticationCachingEnabled</name>
-                      <value>true</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.memberAttributeValueTemplate</name>
-                    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.contextFactory.systemUsername</name>
-                    <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
-                  </param>
-                  <param>
-                    <name>main.ldapRealm.contextFactory.systemPassword</name>
-                    <value>guest-password</value>
-                  </param>
-                  <param>
-                    <name>urls./**</name>
-                    <value>authcBasic</value>
-                  </param>
-              </provider>
+using `org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration and setting up the cache, you *must* set
+the flag for enabling authentication caching to true. Please see the property `main.ldapRealm.authenticationCachingEnabled` below.
+
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.cacheManager</name>
+            <value>org.apache.shiro.cache.ehcache.EhCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authenticationCachingEnabled</name>
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>guest-password</value>
+        </param>
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+    </provider>
 
 
 ### Trying out caching ###
 
 Knox bundles a template topology file that can be used to try out the caching functionality.
-The template file located under {GATEWAY_HOME}/templates is sandbox.knoxrealm.ehcache.xml.
+The template file located under `{GATEWAY_HOME}/templates` is `sandbox.knoxrealm.ehcache.xml`.
 
 To try this out
 
-cd {GATEWAY_HOME}
-cp templates/sandbox.knoxrealm.ehcache.xml conf/topologies/sandbox.xml
-bin/ldap.sh start
-bin/gateway.sh start
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm.ehcache.xml conf/topologies/sandbox.xml
+    bin/ldap.sh start
+    bin/gateway.sh start
 
 The following call to WebHDFS should report: {"Path":"/user/tom"}
 
-curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+    curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
 In order to see the cache working, LDAP can now be shut down and the user will still authenticate successfully.
 
-bin/ldap.sh stop
+    bin/ldap.sh stop
 
 and then the following should still return successfully like it did earlier.
 
-curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+    curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
 
 #### Advanced Caching Config ####
@@ -209,4 +208,4 @@ for the ShiroProvider:
         <value>classpath:ehcache.xml</value>
     </param>
 
-In the above example, place the ehcache.xml file under {GATEWAY_HOME}/conf and restart the gateway server.
+In the above example, place the ehcache.xml file under `{GATEWAY_HOME}/conf` and restart the gateway server.
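+
+If you do not already have an ehcache.xml to start from, a minimal sketch such as the following can be used; the sizes and timeouts are illustrative assumptions, not required values:
+
+    <ehcache>
+        <defaultCache
+            maxElementsInMemory="1000"
+            eternal="false"
+            timeToIdleSeconds="600"
+            timeToLiveSeconds="3600"/>
+    </ehcache>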